2026-03-09T21:04:54.196 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T21:04:54.200 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T21:04:54.230 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658
branch: squid
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python}
email: null
first_in_suite: false
flavor: default
job_id: '658'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
        ms bind msgr1: false
        ms bind msgr2: true
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - but it is still running
    - overall HEALTH_
    - \(OSDMAP_FLAGS\)
    - \(PG_
    - \(OSD_
    - \(OBJECT_
    - \(POOL_APP_NOT_ENABLED\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: root
  install:
    ceph:
      extra_system_packages:
        deb:
        - python3-pytest
        rpm:
        - python3-pytest
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEaIWl9bajzxh2caC+EQ3HDBnVb83mNcfCWwU/Ylbf/GyIPqhbh0m5Htz9NjKBfbW5E3GkIM92ZCrhr5leu79rQ=
  vm10.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKQZxBeNQdlvIwvoaLJOo0AdfM2TtOaOnTJkiOXgkpTMU8UpCgZTYgOzbp/OrzvMUBmOZXsUDxKvMf5TM+t54rs=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test_python.sh
    timeout: 1h
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T21:04:54.230 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T21:04:54.231 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T21:04:54.231 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T21:04:54.231 INFO:teuthology.task.internal:Checking packages...
2026-03-09T21:04:54.231 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T21:04:54.232 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T21:04:54.232 INFO:teuthology.packaging:ref: None
2026-03-09T21:04:54.232 INFO:teuthology.packaging:tag: None
2026-03-09T21:04:54.232 INFO:teuthology.packaging:branch: squid
2026-03-09T21:04:54.232 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:04:54.232 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T21:04:54.843 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:04:54.844 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T21:04:54.888 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T21:04:54.889 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T21:04:54.900 INFO:teuthology.task.internal:Saving configuration
2026-03-09T21:04:54.905 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T21:04:54.906 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T21:04:54.915 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 21:03:44.117014', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEaIWl9bajzxh2caC+EQ3HDBnVb83mNcfCWwU/Ylbf/GyIPqhbh0m5Htz9NjKBfbW5E3GkIM92ZCrhr5leu79rQ='}
2026-03-09T21:04:54.923 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm10.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 21:03:44.117463', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:0a', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKQZxBeNQdlvIwvoaLJOo0AdfM2TtOaOnTJkiOXgkpTMU8UpCgZTYgOzbp/OrzvMUBmOZXsUDxKvMf5TM+t54rs='}
2026-03-09T21:04:54.923 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T21:04:54.924 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-09T21:04:54.924 INFO:teuthology.task.internal:roles: ubuntu@vm10.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-09T21:04:54.924 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T21:04:54.932 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-09T21:04:54.938 DEBUG:teuthology.task.console_log:vm10 does not support IPMI; excluding
2026-03-09T21:04:54.939 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fcab447a170>, signals=[15])
2026-03-09T21:04:54.939 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T21:04:54.939 INFO:teuthology.task.internal:Opening connections...
2026-03-09T21:04:54.940 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-09T21:04:54.940 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T21:04:55.002 DEBUG:teuthology.task.internal:connecting to ubuntu@vm10.local
2026-03-09T21:04:55.003 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm10.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T21:04:55.059 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T21:04:55.060 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-09T21:04:55.076 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-09T21:04:55.077 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Ubuntu"
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="22.04"
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_CODENAME=jammy
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:ID=ubuntu
2026-03-09T21:04:55.119 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE=debian
2026-03-09T21:04:55.120 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T21:04:55.120 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T21:04:55.120 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T21:04:55.120 INFO:teuthology.orchestra.run.vm07.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T21:04:55.120 INFO:teuthology.orchestra.run.vm07.stdout:UBUNTU_CODENAME=jammy
2026-03-09T21:04:55.120 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-09T21:04:55.124 DEBUG:teuthology.orchestra.run.vm10:> uname -m
2026-03-09T21:04:55.127 INFO:teuthology.orchestra.run.vm10.stdout:x86_64
2026-03-09T21:04:55.127 DEBUG:teuthology.orchestra.run.vm10:> cat /etc/os-release
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:NAME="Ubuntu"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:VERSION_ID="22.04"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:VERSION_CODENAME=jammy
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:ID=ubuntu
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:ID_LIKE=debian
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T21:04:55.172 INFO:teuthology.orchestra.run.vm10.stdout:UBUNTU_CODENAME=jammy
2026-03-09T21:04:55.172 INFO:teuthology.lock.ops:Updating vm10.local on lock server
2026-03-09T21:04:55.175 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T21:04:55.177 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T21:04:55.178 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T21:04:55.178 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-09T21:04:55.179 DEBUG:teuthology.orchestra.run.vm10:> test '!' -e /home/ubuntu/cephtest
2026-03-09T21:04:55.215 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T21:04:55.216 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T21:04:55.216 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-09T21:04:55.226 DEBUG:teuthology.orchestra.run.vm10:> test -z $(ls -A /var/lib/ceph)
2026-03-09T21:04:55.228 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T21:04:55.260 INFO:teuthology.orchestra.run.vm10.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T21:04:55.260 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T21:04:55.266 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-09T21:04:55.272 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:04:55.529 DEBUG:teuthology.orchestra.run.vm10:> test -e /ceph-qa-ready
2026-03-09T21:04:55.532 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:04:55.764 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T21:04:55.765 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T21:04:55.765 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T21:04:55.766 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T21:04:55.769 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T21:04:55.770 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T21:04:55.771 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T21:04:55.772 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T21:04:55.813 DEBUG:teuthology.orchestra.run.vm10:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T21:04:55.818 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T21:04:55.820 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T21:04:55.820 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T21:04:55.859 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:04:55.859 DEBUG:teuthology.orchestra.run.vm10:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T21:04:55.862 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:04:55.862 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T21:04:55.901 DEBUG:teuthology.orchestra.run.vm10:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T21:04:55.908 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T21:04:55.912 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T21:04:55.918 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T21:04:55.924 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T21:04:55.925 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T21:04:55.927 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T21:04:55.927 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T21:04:55.957 DEBUG:teuthology.orchestra.run.vm10:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T21:04:55.978 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T21:04:55.981 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T21:04:55.981 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T21:04:56.009 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T21:04:56.025 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T21:04:56.055 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T21:04:56.099 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:04:56.099 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T21:04:56.147 DEBUG:teuthology.orchestra.run.vm10:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T21:04:56.151 DEBUG:teuthology.orchestra.run.vm10:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T21:04:56.197 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:04:56.197 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T21:04:56.251 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-09T21:04:56.252 DEBUG:teuthology.orchestra.run.vm10:> sudo service rsyslog restart
2026-03-09T21:04:56.312 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T21:04:56.314 INFO:teuthology.task.internal:Starting timer...
2026-03-09T21:04:56.314 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T21:04:56.317 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T21:04:56.319 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-09T21:04:56.319 INFO:teuthology.task.selinux:Excluding vm10: VMs are not yet supported
2026-03-09T21:04:56.319 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T21:04:56.319 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T21:04:56.319 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T21:04:56.319 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T21:04:56.320 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T21:04:56.321 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T21:04:56.322 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T21:04:56.942 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T21:04:56.947 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T21:04:56.948 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryjms5fup8 --limit vm07.local,vm10.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T21:07:30.456 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm07.local'), Remote(name='ubuntu@vm10.local')]
2026-03-09T21:07:30.456 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-09T21:07:30.457 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T21:07:30.521 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-09T21:07:30.745 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-09T21:07:30.745 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm10.local'
2026-03-09T21:07:30.745 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm10.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T21:07:30.804 DEBUG:teuthology.orchestra.run.vm10:> true
2026-03-09T21:07:31.017 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm10.local'
2026-03-09T21:07:31.017 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T21:07:31.019 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T21:07:31.019 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T21:07:31.019 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T21:07:31.021 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T21:07:31.021 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Command line: ntpd -gq 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: ---------------------------------------------------- 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: corporation. 
Support and training for ntp-4 are 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: available at https://www.nwtime.org/support 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: ---------------------------------------------------- 2026-03-09T21:07:31.035 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: proto: precision = 0.030 usec (-25) 2026-03-09T21:07:31.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: basedate set to 2022-02-04 2026-03-09T21:07:31.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: gps base set to 2022-02-06 (week 2196) 2026-03-09T21:07:31.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T21:07:31.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T21:07:31.036 INFO:teuthology.orchestra.run.vm07.stderr: 9 Mar 21:07:31 ntpd[16162]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T21:07:31.037 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T21:07:31.037 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T21:07:31.037 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T21:07:31.037 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listen normally on 3 ens3 192.168.123.107:123 2026-03-09T21:07:31.037 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listen normally on 4 lo [::1]:123 2026-03-09T21:07:31.037 
INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:7%2]:123 2026-03-09T21:07:31.037 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:31 ntpd[16162]: Listening on routing socket on fd #22 for interface updates 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Command line: ntpd -gq 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: ---------------------------------------------------- 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: corporation. 
Support and training for ntp-4 are 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: available at https://www.nwtime.org/support 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: ---------------------------------------------------- 2026-03-09T21:07:31.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: proto: precision = 0.030 usec (-25) 2026-03-09T21:07:31.078 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: basedate set to 2022-02-04 2026-03-09T21:07:31.078 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: gps base set to 2022-02-06 (week 2196) 2026-03-09T21:07:31.078 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T21:07:31.078 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T21:07:31.078 INFO:teuthology.orchestra.run.vm10.stderr: 9 Mar 21:07:31 ntpd[16113]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T21:07:31.079 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T21:07:31.079 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T21:07:31.079 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T21:07:31.079 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listen normally on 3 ens3 192.168.123.110:123 2026-03-09T21:07:31.079 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listen normally on 4 lo [::1]:123 2026-03-09T21:07:31.080 
INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:a%2]:123 2026-03-09T21:07:31.080 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:31 ntpd[16113]: Listening on routing socket on fd #22 for interface updates 2026-03-09T21:07:32.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:32 ntpd[16162]: Soliciting pool server 139.144.71.56 2026-03-09T21:07:32.078 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:32 ntpd[16113]: Soliciting pool server 139.144.71.56 2026-03-09T21:07:33.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:33 ntpd[16162]: Soliciting pool server 82.165.178.31 2026-03-09T21:07:33.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:33 ntpd[16162]: Soliciting pool server 88.99.76.254 2026-03-09T21:07:33.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:33 ntpd[16113]: Soliciting pool server 82.165.178.31 2026-03-09T21:07:33.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:33 ntpd[16113]: Soliciting pool server 88.99.76.254 2026-03-09T21:07:34.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:34 ntpd[16162]: Soliciting pool server 185.216.176.59 2026-03-09T21:07:34.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:34 ntpd[16162]: Soliciting pool server 148.251.235.164 2026-03-09T21:07:34.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:34 ntpd[16162]: Soliciting pool server 139.162.156.95 2026-03-09T21:07:34.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:34 ntpd[16113]: Soliciting pool server 185.216.176.59 2026-03-09T21:07:34.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:34 ntpd[16113]: Soliciting pool server 148.251.235.164 2026-03-09T21:07:34.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:34 ntpd[16113]: Soliciting pool server 139.162.156.95 2026-03-09T21:07:35.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:35 ntpd[16162]: Soliciting pool server 
78.46.87.46 2026-03-09T21:07:35.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:35 ntpd[16162]: Soliciting pool server 159.195.55.239 2026-03-09T21:07:35.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:35 ntpd[16162]: Soliciting pool server 5.45.97.204 2026-03-09T21:07:35.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:35 ntpd[16162]: Soliciting pool server 217.14.146.53 2026-03-09T21:07:35.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:35 ntpd[16113]: Soliciting pool server 78.46.87.46 2026-03-09T21:07:35.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:35 ntpd[16113]: Soliciting pool server 159.195.55.239 2026-03-09T21:07:35.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:35 ntpd[16113]: Soliciting pool server 5.45.97.204 2026-03-09T21:07:35.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:35 ntpd[16113]: Soliciting pool server 217.14.146.53 2026-03-09T21:07:36.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:36 ntpd[16162]: Soliciting pool server 129.70.132.32 2026-03-09T21:07:36.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:36 ntpd[16162]: Soliciting pool server 78.47.56.71 2026-03-09T21:07:36.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:36 ntpd[16162]: Soliciting pool server 162.19.170.154 2026-03-09T21:07:36.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:36 ntpd[16162]: Soliciting pool server 185.125.190.58 2026-03-09T21:07:36.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:36 ntpd[16113]: Soliciting pool server 78.47.56.71 2026-03-09T21:07:36.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:36 ntpd[16113]: Soliciting pool server 162.19.170.154 2026-03-09T21:07:36.077 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:36 ntpd[16113]: Soliciting pool server 185.125.190.58 2026-03-09T21:07:37.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:37 ntpd[16162]: Soliciting pool server 185.125.190.57 
2026-03-09T21:07:37.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:37 ntpd[16162]: Soliciting pool server 148.251.5.46
2026-03-09T21:07:37.036 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:37 ntpd[16162]: Soliciting pool server 129.70.132.34
2026-03-09T21:07:37.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:37 ntpd[16113]: Soliciting pool server 185.125.190.57
2026-03-09T21:07:37.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:37 ntpd[16113]: Soliciting pool server 129.70.132.34
2026-03-09T21:07:38.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:38 ntpd[16113]: Soliciting pool server 91.189.91.157
2026-03-09T21:07:38.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:38 ntpd[16113]: Soliciting pool server 240b:4005:12b:fb00:d11d:fbb7:f895:7abe
2026-03-09T21:07:39.059 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 21:07:39 ntpd[16162]: ntpd: time slew -0.003827 s
2026-03-09T21:07:39.059 INFO:teuthology.orchestra.run.vm07.stdout:ntpd: time slew -0.003827s
2026-03-09T21:07:39.076 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:39 ntpd[16113]: Soliciting pool server 185.125.190.56
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:39.078 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:42.099 INFO:teuthology.orchestra.run.vm10.stdout: 9 Mar 21:07:42 ntpd[16113]: ntpd: time slew -0.003085 s
2026-03-09T21:07:42.099 INFO:teuthology.orchestra.run.vm10.stdout:ntpd: time slew -0.003085s
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout:==============================================================================
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:42.118 INFO:teuthology.orchestra.run.vm10.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T21:07:42.118 INFO:teuthology.run_tasks:Running task install...
2026-03-09T21:07:42.120 DEBUG:teuthology.task.install:project ceph
2026-03-09T21:07:42.120 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'extra_system_packages': {'deb': ['python3-pytest'], 'rpm': ['python3-pytest']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T21:07:42.120 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T21:07:42.120 INFO:teuthology.task.install:Using flavor: default
2026-03-09T21:07:42.123 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T21:07:42.123 INFO:teuthology.task.install:extra packages: []
2026-03-09T21:07:42.123 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-key list | grep Ceph
2026-03-09T21:07:42.123 DEBUG:teuthology.orchestra.run.vm10:> sudo apt-key list | grep Ceph
2026-03-09T21:07:42.156 INFO:teuthology.orchestra.run.vm07.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T21:07:42.174 INFO:teuthology.orchestra.run.vm07.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T21:07:42.174 INFO:teuthology.orchestra.run.vm07.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T21:07:42.174 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T21:07:42.174 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-pytest, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T21:07:42.174 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:07:42.197 INFO:teuthology.orchestra.run.vm10.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T21:07:42.215 INFO:teuthology.orchestra.run.vm10.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T21:07:42.215 INFO:teuthology.orchestra.run.vm10.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T21:07:42.215 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T21:07:42.215 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-pytest, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T21:07:42.215 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:07:42.800 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-09T21:07:42.800 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:07:42.870 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-09T21:07:42.870 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:07:43.352 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:07:43.352 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-09T21:07:43.359 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-get update
2026-03-09T21:07:43.397 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:07:43.397 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-09T21:07:43.406 DEBUG:teuthology.orchestra.run.vm10:> sudo apt-get update
2026-03-09T21:07:43.643 INFO:teuthology.orchestra.run.vm07.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T21:07:43.683 INFO:teuthology.orchestra.run.vm07.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T21:07:43.701 INFO:teuthology.orchestra.run.vm10.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T21:07:43.718 INFO:teuthology.orchestra.run.vm07.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T21:07:43.733 INFO:teuthology.orchestra.run.vm10.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T21:07:43.754 INFO:teuthology.orchestra.run.vm07.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T21:07:43.769 INFO:teuthology.orchestra.run.vm10.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T21:07:44.049 INFO:teuthology.orchestra.run.vm07.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-09T21:07:44.056 INFO:teuthology.orchestra.run.vm10.stdout:Ign:4 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-09T21:07:44.168 INFO:teuthology.orchestra.run.vm07.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-09T21:07:44.176 INFO:teuthology.orchestra.run.vm10.stdout:Get:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-09T21:07:44.185 INFO:teuthology.orchestra.run.vm10.stdout:Hit:6 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T21:07:44.287 INFO:teuthology.orchestra.run.vm07.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-09T21:07:44.296 INFO:teuthology.orchestra.run.vm10.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-09T21:07:44.406 INFO:teuthology.orchestra.run.vm07.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-09T21:07:44.417 INFO:teuthology.orchestra.run.vm10.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-09T21:07:44.481 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 25.8 kB in 1s (26.5 kB/s)
2026-03-09T21:07:44.487 INFO:teuthology.orchestra.run.vm10.stdout:Fetched 25.8 kB in 1s (27.7 kB/s)
2026-03-09T21:07:45.097 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:07:45.109 DEBUG:teuthology.orchestra.run.vm10:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:07:45.119 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:07:45.131 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:07:45.142 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:07:45.164 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:07:45.310 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:07:45.310 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:07:45.334 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:07:45.334 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout:The following additional packages will be installed:
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:07:45.412 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout:Suggested packages:
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: smart-notifier mailx | mailutils
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout:Recommended packages:
2026-03-09T21:07:45.413 INFO:teuthology.orchestra.run.vm07.stdout: btrfs-tools
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout:The following NEW packages will be installed:
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: socat unzip xmlstarlet zip
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be upgraded:
2026-03-09T21:07:45.448 INFO:teuthology.orchestra.run.vm07.stdout: librados2 librbd1
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout:The following additional packages will be installed:
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T21:07:45.477 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout:Suggested packages:
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: smart-notifier mailx | mailutils
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout:Recommended packages:
2026-03-09T21:07:45.478 INFO:teuthology.orchestra.run.vm10.stdout: btrfs-tools
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout:The following NEW packages will be installed:
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-09T21:07:45.515 INFO:teuthology.orchestra.run.vm10.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout: socat unzip xmlstarlet zip
2026-03-09T21:07:45.516 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be upgraded:
2026-03-09T21:07:45.517 INFO:teuthology.orchestra.run.vm10.stdout: librados2 librbd1
2026-03-09T21:07:45.651 INFO:teuthology.orchestra.run.vm07.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:07:45.651 INFO:teuthology.orchestra.run.vm07.stdout:Need to get 178 MB of archives.
2026-03-09T21:07:45.651 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-09T21:07:45.651 INFO:teuthology.orchestra.run.vm07.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-09T21:07:45.820 INFO:teuthology.orchestra.run.vm07.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-09T21:07:45.825 INFO:teuthology.orchestra.run.vm07.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-09T21:07:45.859 INFO:teuthology.orchestra.run.vm07.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-09T21:07:45.961 INFO:teuthology.orchestra.run.vm07.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-09T21:07:45.965 INFO:teuthology.orchestra.run.vm07.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-09T21:07:45.979 INFO:teuthology.orchestra.run.vm07.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-09T21:07:45.983 INFO:teuthology.orchestra.run.vm07.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-09T21:07:45.984 INFO:teuthology.orchestra.run.vm07.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-09T21:07:45.984 INFO:teuthology.orchestra.run.vm07.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-09T21:07:45.984 INFO:teuthology.orchestra.run.vm07.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-09T21:07:45.992 INFO:teuthology.orchestra.run.vm10.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:07:45.992 INFO:teuthology.orchestra.run.vm10.stdout:Need to get 178 MB of archives.
2026-03-09T21:07:45.992 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-09T21:07:45.992 INFO:teuthology.orchestra.run.vm10.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-09T21:07:45.993 INFO:teuthology.orchestra.run.vm07.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-09T21:07:45.995 INFO:teuthology.orchestra.run.vm07.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-09T21:07:45.997 INFO:teuthology.orchestra.run.vm07.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-09T21:07:46.028 INFO:teuthology.orchestra.run.vm07.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-09T21:07:46.031 INFO:teuthology.orchestra.run.vm07.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-09T21:07:46.031 INFO:teuthology.orchestra.run.vm07.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-09T21:07:46.033 INFO:teuthology.orchestra.run.vm07.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-09T21:07:46.034 INFO:teuthology.orchestra.run.vm07.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-09T21:07:46.035
INFO:teuthology.orchestra.run.vm07.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T21:07:46.036 INFO:teuthology.orchestra.run.vm07.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T21:07:46.036 INFO:teuthology.orchestra.run.vm07.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T21:07:46.037 INFO:teuthology.orchestra.run.vm07.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T21:07:46.071 INFO:teuthology.orchestra.run.vm07.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T21:07:46.071 INFO:teuthology.orchestra.run.vm07.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T21:07:46.071 INFO:teuthology.orchestra.run.vm07.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T21:07:46.072 INFO:teuthology.orchestra.run.vm07.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T21:07:46.072 INFO:teuthology.orchestra.run.vm07.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T21:07:46.106 INFO:teuthology.orchestra.run.vm07.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T21:07:46.108 INFO:teuthology.orchestra.run.vm07.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T21:07:46.108 INFO:teuthology.orchestra.run.vm07.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 
2026-03-09T21:07:46.109 INFO:teuthology.orchestra.run.vm07.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T21:07:46.109 INFO:teuthology.orchestra.run.vm07.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T21:07:46.113 INFO:teuthology.orchestra.run.vm10.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T21:07:46.142 INFO:teuthology.orchestra.run.vm07.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T21:07:46.142 INFO:teuthology.orchestra.run.vm07.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T21:07:46.143 INFO:teuthology.orchestra.run.vm07.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T21:07:46.143 INFO:teuthology.orchestra.run.vm07.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T21:07:46.144 INFO:teuthology.orchestra.run.vm07.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T21:07:46.178 INFO:teuthology.orchestra.run.vm07.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T21:07:46.178 INFO:teuthology.orchestra.run.vm07.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T21:07:46.179 INFO:teuthology.orchestra.run.vm07.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T21:07:46.179 
INFO:teuthology.orchestra.run.vm07.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T21:07:46.180 INFO:teuthology.orchestra.run.vm07.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T21:07:46.214 INFO:teuthology.orchestra.run.vm07.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T21:07:46.215 INFO:teuthology.orchestra.run.vm07.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T21:07:46.216 INFO:teuthology.orchestra.run.vm07.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T21:07:46.216 INFO:teuthology.orchestra.run.vm07.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T21:07:46.217 INFO:teuthology.orchestra.run.vm07.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T21:07:46.287 INFO:teuthology.orchestra.run.vm07.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T21:07:46.288 INFO:teuthology.orchestra.run.vm07.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T21:07:46.288 INFO:teuthology.orchestra.run.vm07.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T21:07:46.316 INFO:teuthology.orchestra.run.vm07.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T21:07:46.316 INFO:teuthology.orchestra.run.vm07.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe 
amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T21:07:46.316 INFO:teuthology.orchestra.run.vm07.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T21:07:46.317 INFO:teuthology.orchestra.run.vm07.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T21:07:46.317 INFO:teuthology.orchestra.run.vm07.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T21:07:46.317 INFO:teuthology.orchestra.run.vm07.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T21:07:46.323 INFO:teuthology.orchestra.run.vm07.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T21:07:46.355 INFO:teuthology.orchestra.run.vm07.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T21:07:46.356 INFO:teuthology.orchestra.run.vm07.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T21:07:46.356 INFO:teuthology.orchestra.run.vm07.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T21:07:46.359 INFO:teuthology.orchestra.run.vm07.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T21:07:46.362 INFO:teuthology.orchestra.run.vm07.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T21:07:46.362 INFO:teuthology.orchestra.run.vm07.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T21:07:46.362 INFO:teuthology.orchestra.run.vm07.stdout:Get:65 
https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T21:07:46.367 INFO:teuthology.orchestra.run.vm07.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T21:07:46.367 INFO:teuthology.orchestra.run.vm07.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T21:07:46.418 INFO:teuthology.orchestra.run.vm07.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T21:07:46.419 INFO:teuthology.orchestra.run.vm07.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T21:07:46.419 INFO:teuthology.orchestra.run.vm07.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T21:07:46.419 INFO:teuthology.orchestra.run.vm07.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T21:07:46.419 INFO:teuthology.orchestra.run.vm07.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T21:07:46.420 INFO:teuthology.orchestra.run.vm07.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T21:07:46.423 INFO:teuthology.orchestra.run.vm07.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T21:07:46.438 INFO:teuthology.orchestra.run.vm07.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T21:07:46.438 INFO:teuthology.orchestra.run.vm07.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T21:07:46.471 
INFO:teuthology.orchestra.run.vm07.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T21:07:46.471 INFO:teuthology.orchestra.run.vm07.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T21:07:46.486 INFO:teuthology.orchestra.run.vm10.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T21:07:46.539 INFO:teuthology.orchestra.run.vm07.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T21:07:46.598 INFO:teuthology.orchestra.run.vm10.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T21:07:46.603 INFO:teuthology.orchestra.run.vm10.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T21:07:46.938 INFO:teuthology.orchestra.run.vm10.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T21:07:46.951 INFO:teuthology.orchestra.run.vm10.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T21:07:46.965 INFO:teuthology.orchestra.run.vm10.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T21:07:47.004 INFO:teuthology.orchestra.run.vm10.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T21:07:47.019 INFO:teuthology.orchestra.run.vm10.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 
2026-03-09T21:07:47.019 INFO:teuthology.orchestra.run.vm10.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T21:07:47.019 INFO:teuthology.orchestra.run.vm10.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T21:07:47.020 INFO:teuthology.orchestra.run.vm10.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T21:07:47.049 INFO:teuthology.orchestra.run.vm10.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T21:07:47.055 INFO:teuthology.orchestra.run.vm10.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T21:07:47.062 INFO:teuthology.orchestra.run.vm10.stdout:Get:16 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T21:07:47.064 INFO:teuthology.orchestra.run.vm10.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T21:07:47.076 INFO:teuthology.orchestra.run.vm10.stdout:Get:18 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T21:07:47.081 INFO:teuthology.orchestra.run.vm10.stdout:Get:19 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T21:07:47.082 INFO:teuthology.orchestra.run.vm10.stdout:Get:20 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 
19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T21:07:47.086 INFO:teuthology.orchestra.run.vm10.stdout:Get:21 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T21:07:47.087 INFO:teuthology.orchestra.run.vm10.stdout:Get:22 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T21:07:47.093 INFO:teuthology.orchestra.run.vm10.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T21:07:47.159 INFO:teuthology.orchestra.run.vm10.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T21:07:47.159 INFO:teuthology.orchestra.run.vm10.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T21:07:47.161 INFO:teuthology.orchestra.run.vm10.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T21:07:47.164 INFO:teuthology.orchestra.run.vm10.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T21:07:47.167 INFO:teuthology.orchestra.run.vm10.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T21:07:47.167 INFO:teuthology.orchestra.run.vm10.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T21:07:47.168 INFO:teuthology.orchestra.run.vm10.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 
2026-03-09T21:07:47.169 INFO:teuthology.orchestra.run.vm10.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T21:07:47.271 INFO:teuthology.orchestra.run.vm10.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T21:07:47.271 INFO:teuthology.orchestra.run.vm10.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T21:07:47.271 INFO:teuthology.orchestra.run.vm10.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T21:07:47.272 INFO:teuthology.orchestra.run.vm10.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T21:07:47.272 INFO:teuthology.orchestra.run.vm10.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T21:07:47.272 INFO:teuthology.orchestra.run.vm10.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T21:07:47.374 INFO:teuthology.orchestra.run.vm10.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T21:07:47.375 INFO:teuthology.orchestra.run.vm10.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T21:07:47.375 INFO:teuthology.orchestra.run.vm10.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T21:07:47.376 INFO:teuthology.orchestra.run.vm10.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T21:07:47.413 INFO:teuthology.orchestra.run.vm10.stdout:Get:42 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T21:07:47.414 INFO:teuthology.orchestra.run.vm10.stdout:Get:43 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T21:07:47.417 INFO:teuthology.orchestra.run.vm10.stdout:Get:44 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T21:07:47.478 INFO:teuthology.orchestra.run.vm10.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T21:07:47.478 INFO:teuthology.orchestra.run.vm10.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T21:07:47.479 INFO:teuthology.orchestra.run.vm10.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T21:07:47.480 INFO:teuthology.orchestra.run.vm10.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T21:07:47.480 INFO:teuthology.orchestra.run.vm10.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T21:07:47.487 INFO:teuthology.orchestra.run.vm10.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T21:07:47.581 INFO:teuthology.orchestra.run.vm10.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T21:07:47.581 INFO:teuthology.orchestra.run.vm10.stdout:Get:52 
https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T21:07:47.582 INFO:teuthology.orchestra.run.vm10.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T21:07:47.583 INFO:teuthology.orchestra.run.vm10.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T21:07:47.648 INFO:teuthology.orchestra.run.vm07.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T21:07:47.685 INFO:teuthology.orchestra.run.vm10.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T21:07:47.686 INFO:teuthology.orchestra.run.vm10.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T21:07:47.690 INFO:teuthology.orchestra.run.vm10.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T21:07:47.690 INFO:teuthology.orchestra.run.vm10.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T21:07:47.691 INFO:teuthology.orchestra.run.vm10.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T21:07:47.725 INFO:teuthology.orchestra.run.vm10.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T21:07:47.788 INFO:teuthology.orchestra.run.vm10.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T21:07:47.789 INFO:teuthology.orchestra.run.vm10.stdout:Get:62 
https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T21:07:47.819 INFO:teuthology.orchestra.run.vm10.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T21:07:47.820 INFO:teuthology.orchestra.run.vm10.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T21:07:47.894 INFO:teuthology.orchestra.run.vm10.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T21:07:47.895 INFO:teuthology.orchestra.run.vm10.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T21:07:47.895 INFO:teuthology.orchestra.run.vm10.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T21:07:47.895 INFO:teuthology.orchestra.run.vm10.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T21:07:47.899 INFO:teuthology.orchestra.run.vm10.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T21:07:47.903 INFO:teuthology.orchestra.run.vm10.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T21:07:47.998 INFO:teuthology.orchestra.run.vm10.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T21:07:47.998 INFO:teuthology.orchestra.run.vm10.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T21:07:48.003 INFO:teuthology.orchestra.run.vm10.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T21:07:48.011 
INFO:teuthology.orchestra.run.vm10.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T21:07:48.101 INFO:teuthology.orchestra.run.vm10.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T21:07:48.102 INFO:teuthology.orchestra.run.vm10.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T21:07:48.108 INFO:teuthology.orchestra.run.vm10.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T21:07:48.108 INFO:teuthology.orchestra.run.vm10.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T21:07:48.112 INFO:teuthology.orchestra.run.vm10.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T21:07:48.112 INFO:teuthology.orchestra.run.vm10.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T21:07:48.205 INFO:teuthology.orchestra.run.vm10.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T21:07:48.205 INFO:teuthology.orchestra.run.vm10.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T21:07:48.207 INFO:teuthology.orchestra.run.vm10.stdout:Get:83 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T21:07:48.208 INFO:teuthology.orchestra.run.vm10.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T21:07:48.308 INFO:teuthology.orchestra.run.vm10.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 
0.9.6-1.3 [24.8 kB] 2026-03-09T21:07:48.308 INFO:teuthology.orchestra.run.vm10.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T21:07:48.309 INFO:teuthology.orchestra.run.vm10.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T21:07:48.311 INFO:teuthology.orchestra.run.vm10.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T21:07:48.312 INFO:teuthology.orchestra.run.vm10.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T21:07:48.339 INFO:teuthology.orchestra.run.vm10.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T21:07:48.395 INFO:teuthology.orchestra.run.vm10.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T21:07:48.571 INFO:teuthology.orchestra.run.vm07.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T21:07:48.608 INFO:teuthology.orchestra.run.vm10.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T21:07:48.610 INFO:teuthology.orchestra.run.vm10.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T21:07:48.611 INFO:teuthology.orchestra.run.vm10.stdout:Get:94 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T21:07:48.634 INFO:teuthology.orchestra.run.vm10.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T21:07:48.871 INFO:teuthology.orchestra.run.vm10.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T21:07:48.885 INFO:teuthology.orchestra.run.vm07.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T21:07:48.997 INFO:teuthology.orchestra.run.vm07.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T21:07:48.997 INFO:teuthology.orchestra.run.vm07.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T21:07:49.004 INFO:teuthology.orchestra.run.vm07.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T21:07:49.010 INFO:teuthology.orchestra.run.vm07.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T21:07:49.116 
INFO:teuthology.orchestra.run.vm07.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T21:07:49.904 INFO:teuthology.orchestra.run.vm10.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T21:07:49.905 INFO:teuthology.orchestra.run.vm10.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T21:07:49.990 INFO:teuthology.orchestra.run.vm10.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T21:07:50.104 INFO:teuthology.orchestra.run.vm10.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T21:07:50.110 INFO:teuthology.orchestra.run.vm10.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T21:07:50.111 INFO:teuthology.orchestra.run.vm10.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T21:07:50.224 INFO:teuthology.orchestra.run.vm10.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 
19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T21:07:50.620 INFO:teuthology.orchestra.run.vm10.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T21:07:50.620 INFO:teuthology.orchestra.run.vm10.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T21:07:50.824 INFO:teuthology.orchestra.run.vm07.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T21:07:50.827 INFO:teuthology.orchestra.run.vm07.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T21:07:50.939 INFO:teuthology.orchestra.run.vm07.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T21:07:53.389 INFO:teuthology.orchestra.run.vm10.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T21:07:53.390 INFO:teuthology.orchestra.run.vm10.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T21:07:53.390 INFO:teuthology.orchestra.run.vm10.stdout:Get:108 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T21:07:53.994 INFO:teuthology.orchestra.run.vm10.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T21:07:54.329 INFO:teuthology.orchestra.run.vm10.stdout:Fetched 178 MB in 8s (21.0 MB/s) 2026-03-09T21:07:54.554 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T21:07:54.594 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T21:07:54.596 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T21:07:54.598 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T21:07:54.621 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T21:07:54.622 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T21:07:54.623 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 
2026-03-09T21:07:54.639 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T21:07:54.644 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T21:07:54.645 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T21:07:54.667 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T21:07:54.673 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T21:07:54.680 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:07:54.735 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T21:07:54.741 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T21:07:54.742 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:07:54.763 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T21:07:54.768 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T21:07:54.769 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:07:54.799 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T21:07:54.803 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T21:07:54.804 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 
2026-03-09T21:07:54.831 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:54.833 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T21:07:54.928 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:54.930 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T21:07:54.995 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libnbd0. 2026-03-09T21:07:55.000 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T21:07:55.000 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T21:07:55.015 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T21:07:55.020 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.020 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.046 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rados. 2026-03-09T21:07:55.051 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.051 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.069 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T21:07:55.075 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-09T21:07:55.076 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.092 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T21:07:55.098 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.098 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.115 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T21:07:55.121 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:07:55.122 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.141 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T21:07:55.148 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T21:07:55.149 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T21:07:55.166 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T21:07:55.172 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T21:07:55.173 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T21:07:55.260 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T21:07:55.267 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T21:07:55.307 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.474 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T21:07:55.480 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T21:07:55.481 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T21:07:55.502 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T21:07:55.507 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T21:07:55.508 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T21:07:55.526 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T21:07:55.532 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T21:07:55.532 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T21:07:55.553 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua5.1. 2026-03-09T21:07:55.559 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T21:07:55.559 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T21:07:55.581 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua-any. 2026-03-09T21:07:55.587 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T21:07:55.588 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua-any (27ubuntu1) ... 
2026-03-09T21:07:55.603 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package zip. 2026-03-09T21:07:55.609 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T21:07:55.610 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T21:07:55.629 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package unzip. 2026-03-09T21:07:55.635 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T21:07:55.636 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T21:07:55.657 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package luarocks. 2026-03-09T21:07:55.663 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T21:07:55.663 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T21:07:55.718 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package librgw2. 2026-03-09T21:07:55.724 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.725 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.912 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T21:07:55.913 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.914 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.933 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package liboath0:amd64. 
2026-03-09T21:07:55.939 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T21:07:55.940 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T21:07:55.957 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T21:07:55.963 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.964 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:55.987 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-common. 2026-03-09T21:07:55.992 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:55.993 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:56.535 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-base. 2026-03-09T21:07:56.540 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:56.544 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:56.683 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T21:07:56.689 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T21:07:56.690 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T21:07:56.704 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cheroot. 
2026-03-09T21:07:56.710 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T21:07:56.711 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T21:07:56.729 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T21:07:56.735 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T21:07:56.735 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T21:07:56.751 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T21:07:56.758 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T21:07:56.759 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T21:07:56.775 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T21:07:56.781 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T21:07:56.781 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T21:07:56.796 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T21:07:56.801 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T21:07:56.801 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T21:07:56.818 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-portend. 
2026-03-09T21:07:56.823 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T21:07:56.824 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T21:07:56.842 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T21:07:56.848 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T21:07:56.849 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T21:07:56.864 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T21:07:56.870 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T21:07:56.870 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T21:07:56.903 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T21:07:56.908 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T21:07:56.909 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T21:07:56.928 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T21:07:56.935 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T21:07:56.936 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T21:07:56.953 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-mako. 2026-03-09T21:07:56.959 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 
2026-03-09T21:07:56.960 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T21:07:56.982 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T21:07:56.987 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T21:07:56.988 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T21:07:57.008 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T21:07:57.013 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T21:07:57.014 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T21:07:57.029 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-webob. 2026-03-09T21:07:57.033 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T21:07:57.034 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T21:07:57.056 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T21:07:57.062 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T21:07:57.064 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T21:07:57.082 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T21:07:57.088 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 
2026-03-09T21:07:57.089 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T21:07:57.105 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-paste. 2026-03-09T21:07:57.105 INFO:teuthology.orchestra.run.vm07.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T21:07:57.111 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T21:07:57.112 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T21:07:57.148 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T21:07:57.154 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T21:07:57.155 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T21:07:57.170 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T21:07:57.176 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T21:07:57.178 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T21:07:57.197 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T21:07:57.202 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T21:07:57.203 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T21:07:57.220 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pecan. 
2026-03-09T21:07:57.226 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T21:07:57.227 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T21:07:57.259 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T21:07:57.264 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T21:07:57.265 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T21:07:57.290 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T21:07:57.296 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:07:57.297 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:57.339 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T21:07:57.345 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:57.346 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:57.364 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T21:07:57.370 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:57.371 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:57.408 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mon. 
2026-03-09T21:07:57.414 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:57.414 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:57.530 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T21:07:57.535 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T21:07:57.536 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T21:07:57.555 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T21:07:57.561 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:57.562 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:57.973 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph. 2026-03-09T21:07:57.979 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:57.980 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:57.997 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T21:07:58.002 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:58.003 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:58.041 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mds. 
2026-03-09T21:07:58.047 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:58.047 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:58.116 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package cephadm. 2026-03-09T21:07:58.124 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:07:58.125 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:58.146 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T21:07:58.152 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T21:07:58.153 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T21:07:58.217 INFO:teuthology.orchestra.run.vm07.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T21:07:58.258 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T21:07:58.271 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-09T21:07:58.311 INFO:teuthology.orchestra.run.vm07.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T21:07:58.387 INFO:teuthology.orchestra.run.vm07.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T21:07:58.405 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:58.432 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T21:07:58.438 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T21:07:58.438 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T21:07:58.454 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-routes. 2026-03-09T21:07:58.460 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T21:07:58.461 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T21:07:58.487 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T21:07:58.493 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:07:58.493 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:07:58.540 INFO:teuthology.orchestra.run.vm07.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T21:07:59.028 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T21:07:59.034 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T21:07:59.035 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T21:07:59.106 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T21:07:59.112 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T21:07:59.113 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T21:07:59.150 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T21:07:59.156 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T21:07:59.157 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T21:07:59.176 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T21:07:59.183 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T21:07:59.184 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T21:07:59.329 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 
2026-03-09T21:07:59.336 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:07:59.336 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:07:59.676 INFO:teuthology.orchestra.run.vm07.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T21:07:59.796 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T21:07:59.802 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T21:07:59.803 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T21:07:59.819 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T21:07:59.823 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T21:07:59.824 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T21:07:59.841 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T21:07:59.846 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T21:07:59.847 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T21:07:59.869 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T21:07:59.876 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 
2026-03-09T21:07:59.877 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T21:07:59.893 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T21:07:59.899 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T21:07:59.900 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T21:07:59.919 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T21:07:59.925 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T21:07:59.938 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T21:08:00.123 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T21:08:00.128 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:00.129 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:00.146 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T21:08:00.152 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T21:08:00.153 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T21:08:00.172 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T21:08:00.179 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 
2026-03-09T21:08:00.179 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T21:08:00.196 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package jq. 2026-03-09T21:08:00.201 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T21:08:00.203 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T21:08:00.220 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package socat. 2026-03-09T21:08:00.226 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T21:08:00.227 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T21:08:00.252 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T21:08:00.257 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T21:08:00.258 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T21:08:00.306 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-test. 2026-03-09T21:08:00.312 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:00.312 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:01.376 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T21:08:01.383 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:01.384 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:08:01.416 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T21:08:01.422 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:01.423 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:01.442 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T21:08:01.448 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T21:08:01.535 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T21:08:01.688 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T21:08:01.694 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T21:08:01.695 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T21:08:01.715 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T21:08:01.722 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T21:08:01.723 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T21:08:01.765 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package pkg-config. 2026-03-09T21:08:01.772 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T21:08:01.773 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T21:08:01.790 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python-asyncssh-doc. 
2026-03-09T21:08:01.796 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T21:08:01.797 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T21:08:01.856 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T21:08:01.863 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T21:08:01.864 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T21:08:01.880 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T21:08:01.885 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T21:08:01.886 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T21:08:01.907 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T21:08:01.912 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T21:08:01.913 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T21:08:01.934 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T21:08:01.941 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T21:08:01.942 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T21:08:01.972 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-py. 2026-03-09T21:08:01.977 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 
2026-03-09T21:08:01.978 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T21:08:02.004 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T21:08:02.010 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T21:08:02.011 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T21:08:02.082 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T21:08:02.088 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T21:08:02.089 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T21:08:02.109 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-toml. 2026-03-09T21:08:02.115 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T21:08:02.116 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T21:08:02.134 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T21:08:02.140 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T21:08:02.141 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T21:08:02.171 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T21:08:02.177 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T21:08:02.178 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 
2026-03-09T21:08:02.200 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T21:08:02.206 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T21:08:02.207 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T21:08:02.377 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package radosgw. 2026-03-09T21:08:02.380 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:02.381 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:02.653 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T21:08:02.656 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:02.657 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:02.674 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package smartmontools. 2026-03-09T21:08:02.680 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T21:08:02.688 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T21:08:02.732 INFO:teuthology.orchestra.run.vm10.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 
2026-03-09T21:08:02.917 INFO:teuthology.orchestra.run.vm07.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T21:08:02.917 INFO:teuthology.orchestra.run.vm07.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T21:08:02.957 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T21:08:02.957 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T21:08:03.033 INFO:teuthology.orchestra.run.vm07.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T21:08:03.287 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T21:08:03.346 INFO:teuthology.orchestra.run.vm07.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T21:08:03.354 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T21:08:03.357 INFO:teuthology.orchestra.run.vm10.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 
2026-03-09T21:08:03.370 INFO:teuthology.orchestra.run.vm07.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T21:08:03.411 INFO:teuthology.orchestra.run.vm07.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T21:08:03.419 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T21:08:03.656 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T21:08:03.682 INFO:teuthology.orchestra.run.vm07.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T21:08:04.071 INFO:teuthology.orchestra.run.vm10.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T21:08:04.077 INFO:teuthology.orchestra.run.vm10.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T21:08:04.083 INFO:teuthology.orchestra.run.vm10.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:04.125 INFO:teuthology.orchestra.run.vm10.stdout:Adding system user cephadm....done 2026-03-09T21:08:04.132 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T21:08:04.204 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 
2026-03-09T21:08:04.266 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T21:08:04.269 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T21:08:04.334 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T21:08:04.403 INFO:teuthology.orchestra.run.vm10.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T21:08:04.406 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T21:08:04.471 INFO:teuthology.orchestra.run.vm07.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T21:08:04.471 INFO:teuthology.orchestra.run.vm07.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T21:08:04.577 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T21:08:04.814 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T21:08:04.884 INFO:teuthology.orchestra.run.vm10.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T21:08:04.893 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T21:08:04.964 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T21:08:05.030 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:05.099 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T21:08:05.101 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libnbd0 (1.10.5-1) ... 
2026-03-09T21:08:05.104 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T21:08:05.107 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T21:08:05.109 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T21:08:05.112 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T21:08:05.116 INFO:teuthology.orchestra.run.vm10.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T21:08:05.118 INFO:teuthology.orchestra.run.vm10.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T21:08:05.120 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T21:08:05.122 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T21:08:05.241 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T21:08:05.316 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T21:08:05.390 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T21:08:05.470 INFO:teuthology.orchestra.run.vm10.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T21:08:05.473 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T21:08:05.764 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T21:08:05.835 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T21:08:05.838 INFO:teuthology.orchestra.run.vm10.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 
2026-03-09T21:08:05.840 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T21:08:05.951 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T21:08:06.100 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T21:08:06.243 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T21:08:06.334 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T21:08:06.448 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T21:08:06.516 INFO:teuthology.orchestra.run.vm10.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T21:08:06.518 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:06.607 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T21:08:07.162 INFO:teuthology.orchestra.run.vm10.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T21:08:07.186 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:08:07.191 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T21:08:07.264 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T21:08:07.266 INFO:teuthology.orchestra.run.vm10.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T21:08:07.268 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T21:08:07.335 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T21:08:07.399 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-09T21:08:07.402 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T21:08:07.475 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T21:08:07.543 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T21:08:07.616 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T21:08:07.685 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T21:08:07.751 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T21:08:07.824 INFO:teuthology.orchestra.run.vm10.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T21:08:07.827 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T21:08:07.830 INFO:teuthology.orchestra.run.vm07.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T21:08:07.830 INFO:teuthology.orchestra.run.vm07.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T21:08:07.831 INFO:teuthology.orchestra.run.vm07.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T21:08:07.909 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T21:08:07.912 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T21:08:07.989 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 
2026-03-09T21:08:08.072 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T21:08:08.312 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T21:08:08.379 INFO:teuthology.orchestra.run.vm10.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T21:08:08.381 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T21:08:08.383 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T21:08:08.385 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T21:08:08.414 INFO:teuthology.orchestra.run.vm07.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T21:08:08.529 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T21:08:08.606 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T21:08:08.608 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T21:08:08.680 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:08:08.683 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T21:08:08.766 INFO:teuthology.orchestra.run.vm10.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T21:08:08.768 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T21:08:08.779 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 178 MB in 23s (7744 kB/s) 2026-03-09T21:08:08.815 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liblttng-ust1:amd64. 
2026-03-09T21:08:08.844 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T21:08:08.856 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T21:08:08.858 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T21:08:08.860 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T21:08:08.881 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T21:08:08.888 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T21:08:08.889 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T21:08:08.907 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T21:08:08.913 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T21:08:08.914 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T21:08:08.942 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5core5a:amd64. 
2026-03-09T21:08:08.947 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T21:08:08.952 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:08:08.991 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T21:08:09.004 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T21:08:09.010 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T21:08:09.010 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:08:09.029 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T21:08:09.034 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T21:08:09.035 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:08:09.063 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T21:08:09.070 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T21:08:09.071 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T21:08:09.075 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T21:08:09.097 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:09.100 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 
2026-03-09T21:08:09.206 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:09.209 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T21:08:09.209 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T21:08:09.212 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.214 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.216 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T21:08:09.281 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libnbd0. 2026-03-09T21:08:09.287 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T21:08:09.288 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T21:08:09.306 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T21:08:09.313 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:09.314 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.345 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rados. 2026-03-09T21:08:09.350 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:09.351 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:08:09.375 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T21:08:09.381 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:09.381 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.397 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T21:08:09.402 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:09.403 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.424 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T21:08:09.430 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:09.431 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.456 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T21:08:09.466 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T21:08:09.467 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T21:08:09.493 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T21:08:09.502 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T21:08:09.570 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-prettytable (2.5.0-2) ... 
2026-03-09T21:08:09.736 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T21:08:09.744 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:09.744 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:09.766 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T21:08:09.771 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T21:08:09.772 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T21:08:09.795 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T21:08:09.800 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T21:08:09.801 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T21:08:09.822 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T21:08:09.829 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T21:08:09.830 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T21:08:09.855 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua5.1. 2026-03-09T21:08:09.861 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T21:08:09.862 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T21:08:09.891 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-any. 
2026-03-09T21:08:09.899 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T21:08:10.088 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T21:08:10.105 INFO:teuthology.orchestra.run.vm10.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T21:08:10.125 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.128 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.129 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package zip. 2026-03-09T21:08:10.130 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.133 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.134 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T21:08:10.135 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T21:08:10.136 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.152 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package unzip. 2026-03-09T21:08:10.158 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T21:08:10.159 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T21:08:10.179 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package luarocks. 2026-03-09T21:08:10.184 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T21:08:10.185 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 
2026-03-09T21:08:10.199 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T21:08:10.199 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T21:08:10.236 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package librgw2. 2026-03-09T21:08:10.241 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:10.243 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.443 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T21:08:10.448 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:10.449 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.468 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T21:08:10.474 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T21:08:10.475 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T21:08:10.493 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T21:08:10.498 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:10.499 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:08:10.523 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-common. 2026-03-09T21:08:10.527 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:10.528 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.551 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.553 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.556 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.559 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.561 INFO:teuthology.orchestra.run.vm10.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.564 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.566 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.569 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:10.604 INFO:teuthology.orchestra.run.vm10.stdout:Adding group ceph....done 2026-03-09T21:08:10.644 INFO:teuthology.orchestra.run.vm10.stdout:Adding system user ceph....done 2026-03-09T21:08:10.654 INFO:teuthology.orchestra.run.vm10.stdout:Setting system user ceph properties....done 2026-03-09T21:08:10.658 INFO:teuthology.orchestra.run.vm10.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T21:08:10.723 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 
2026-03-09T21:08:10.934 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T21:08:11.085 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-base. 2026-03-09T21:08:11.091 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:11.095 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.219 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T21:08:11.225 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T21:08:11.226 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T21:08:11.241 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T21:08:11.247 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T21:08:11.248 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T21:08:11.267 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T21:08:11.273 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T21:08:11.274 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T21:08:11.289 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T21:08:11.295 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 
2026-03-09T21:08:11.296 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T21:08:11.297 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.300 INFO:teuthology.orchestra.run.vm10.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.311 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T21:08:11.317 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T21:08:11.318 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T21:08:11.332 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T21:08:11.337 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T21:08:11.338 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T21:08:11.353 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-portend. 2026-03-09T21:08:11.358 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T21:08:11.359 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T21:08:11.373 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T21:08:11.378 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T21:08:11.379 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T21:08:11.394 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cherrypy3. 
2026-03-09T21:08:11.401 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T21:08:11.402 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T21:08:11.431 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T21:08:11.436 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T21:08:11.437 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T21:08:11.454 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T21:08:11.459 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T21:08:11.460 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T21:08:11.478 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-mako. 2026-03-09T21:08:11.484 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T21:08:11.485 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T21:08:11.506 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T21:08:11.512 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T21:08:11.513 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T21:08:11.527 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T21:08:11.533 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 
2026-03-09T21:08:11.534 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T21:08:11.548 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-webob. 2026-03-09T21:08:11.552 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T21:08:11.552 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T21:08:11.553 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T21:08:11.554 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T21:08:11.574 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T21:08:11.579 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T21:08:11.582 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T21:08:11.601 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T21:08:11.607 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T21:08:11.608 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T21:08:11.625 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-paste. 2026-03-09T21:08:11.632 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T21:08:11.633 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 
2026-03-09T21:08:11.675 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T21:08:11.681 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T21:08:11.683 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T21:08:11.699 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T21:08:11.705 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T21:08:11.706 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T21:08:11.724 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T21:08:11.730 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T21:08:11.731 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T21:08:11.749 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T21:08:11.755 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T21:08:11.756 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T21:08:11.791 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T21:08:11.797 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T21:08:11.798 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 
2026-03-09T21:08:11.824 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T21:08:11.832 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:11.833 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.875 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T21:08:11.880 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:11.881 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.898 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T21:08:11.904 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:11.905 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.928 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:11.940 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T21:08:11.946 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:11.947 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.026 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 
2026-03-09T21:08:12.115 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T21:08:12.121 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T21:08:12.122 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T21:08:12.145 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T21:08:12.148 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:12.149 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.383 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.457 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T21:08:12.457 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T21:08:12.561 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph. 2026-03-09T21:08:12.563 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:12.564 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.583 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T21:08:12.589 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:12.590 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:08:12.625 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T21:08:12.630 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:12.631 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.697 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package cephadm. 2026-03-09T21:08:12.702 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T21:08:12.714 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.765 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T21:08:12.769 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T21:08:12.770 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T21:08:12.848 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.880 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T21:08:12.880 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:12.881 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:12.905 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T21:08:12.909 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 
2026-03-09T21:08:12.910 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T21:08:12.925 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T21:08:12.925 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T21:08:12.929 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-routes. 2026-03-09T21:08:12.932 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T21:08:12.933 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T21:08:12.957 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T21:08:12.961 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T21:08:12.962 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:13.380 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:13.530 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T21:08:13.530 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T21:08:13.658 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T21:08:13.661 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 
2026-03-09T21:08:13.662 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T21:08:13.761 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T21:08:13.761 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T21:08:13.762 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T21:08:13.795 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T21:08:13.798 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T21:08:13.799 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T21:08:13.813 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T21:08:13.817 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T21:08:13.818 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T21:08:13.943 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:13.960 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:13.973 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:13.998 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T21:08:14.003 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-09T21:08:14.004 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:14.033 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T21:08:14.033 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T21:08:14.359 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T21:08:14.365 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T21:08:14.366 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T21:08:14.384 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T21:08:14.390 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T21:08:14.391 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T21:08:14.413 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T21:08:14.418 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:14.420 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T21:08:14.421 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T21:08:14.432 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:14.435 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:08:14.440 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T21:08:14.445 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T21:08:14.446 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T21:08:14.449 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:08:14.463 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T21:08:14.470 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T21:08:14.470 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T21:08:14.696 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T21:08:14.704 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T21:08:14.708 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T21:08:14.715 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T21:08:14.721 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T21:08:14.728 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T21:08:14.806 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T21:08:14.912 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 
2026-03-09T21:08:14.919 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T21:08:14.920 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:14.938 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-09T21:08:14.945 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-09T21:08:14.946 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T21:08:14.965 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-09T21:08:14.972 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T21:08:14.973 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T21:08:14.990 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package jq.
2026-03-09T21:08:14.996 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T21:08:14.997 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-09T21:08:15.018 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package socat.
2026-03-09T21:08:15.018 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-09T21:08:15.019 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-09T21:08:15.046 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package xmlstarlet.
2026-03-09T21:08:15.050 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-09T21:08:15.051 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-09T21:08:15.100 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-test.
2026-03-09T21:08:15.106 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T21:08:15.107 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:15.286 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:15.286 INFO:teuthology.orchestra.run.vm10.stdout:Running kernel seems to be up-to-date.
2026-03-09T21:08:15.286 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:15.286 INFO:teuthology.orchestra.run.vm10.stdout:Services to be restarted:
2026-03-09T21:08:15.293 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart packagekit.service
2026-03-09T21:08:15.296 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:15.296 INFO:teuthology.orchestra.run.vm10.stdout:Service restarts being deferred:
2026-03-09T21:08:15.296 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T21:08:15.296 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart unattended-upgrades.service
2026-03-09T21:08:15.296 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:15.297 INFO:teuthology.orchestra.run.vm10.stdout:No containers need to be restarted.
2026-03-09T21:08:15.297 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:15.297 INFO:teuthology.orchestra.run.vm10.stdout:No user sessions are running outdated binaries.
2026-03-09T21:08:15.297 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:15.297 INFO:teuthology.orchestra.run.vm10.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T21:08:16.118 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-volume.
2026-03-09T21:08:16.124 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T21:08:16.124 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:16.155 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-09T21:08:16.161 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T21:08:16.162 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:16.180 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-09T21:08:16.185 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:08:16.186 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-09T21:08:16.187 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T21:08:16.188 DEBUG:teuthology.orchestra.run.vm10:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-pytest python3-xmltodict python3-jmespath
2026-03-09T21:08:16.212 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-09T21:08:16.217 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-09T21:08:16.218 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-09T21:08:16.238 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package nvme-cli.
2026-03-09T21:08:16.242 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-09T21:08:16.243 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T21:08:16.260 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:08:16.287 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package pkg-config.
2026-03-09T21:08:16.293 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-09T21:08:16.293 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T21:08:16.309 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-09T21:08:16.314 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T21:08:16.315 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T21:08:16.367 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-09T21:08:16.373 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-09T21:08:16.373 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-09T21:08:16.390 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pastescript.
2026-03-09T21:08:16.396 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-09T21:08:16.396 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-09T21:08:16.425 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pluggy.
2026-03-09T21:08:16.430 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-09T21:08:16.431 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-09T21:08:16.453 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-psutil.
2026-03-09T21:08:16.461 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-09T21:08:16.462 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-09T21:08:16.465 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:08:16.466 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:08:16.491 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-py.
2026-03-09T21:08:16.496 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-09T21:08:16.497 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-09T21:08:16.528 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pygments.
2026-03-09T21:08:16.534 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-09T21:08:16.534 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T21:08:16.629 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-09T21:08:16.637 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-09T21:08:16.637 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-09T21:08:16.651 INFO:teuthology.orchestra.run.vm10.stdout:python3-pytest is already the newest version (6.2.5-1ubuntu2).
2026-03-09T21:08:16.651 INFO:teuthology.orchestra.run.vm10.stdout:python3-pytest set to manually installed.
2026-03-09T21:08:16.651 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:08:16.651 INFO:teuthology.orchestra.run.vm10.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:08:16.652 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T21:08:16.652 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:08:16.656 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-toml.
2026-03-09T21:08:16.663 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-09T21:08:16.664 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-09T21:08:16.670 INFO:teuthology.orchestra.run.vm10.stdout:The following NEW packages will be installed:
2026-03-09T21:08:16.670 INFO:teuthology.orchestra.run.vm10.stdout: python3-jmespath python3-xmltodict
2026-03-09T21:08:16.680 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pytest.
2026-03-09T21:08:16.686 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-09T21:08:16.687 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T21:08:16.716 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-simplejson.
2026-03-09T21:08:16.722 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-09T21:08:16.724 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-09T21:08:16.747 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-09T21:08:16.753 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-09T21:08:16.753 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-09T21:08:17.082 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package radosgw.
2026-03-09T21:08:17.085 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T21:08:17.086 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:17.125 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:08:17.125 INFO:teuthology.orchestra.run.vm10.stdout:Need to get 34.3 kB of archives.
2026-03-09T21:08:17.125 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-09T21:08:17.125 INFO:teuthology.orchestra.run.vm10.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T21:08:17.341 INFO:teuthology.orchestra.run.vm10.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T21:08:17.342 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package rbd-fuse.
2026-03-09T21:08:17.348 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T21:08:17.349 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:17.368 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package smartmontools.
2026-03-09T21:08:17.374 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-09T21:08:17.382 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T21:08:17.426 INFO:teuthology.orchestra.run.vm07.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T21:08:17.533 INFO:teuthology.orchestra.run.vm10.stdout:Fetched 34.3 kB in 1s (50.8 kB/s)
2026-03-09T21:08:17.546 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T21:08:17.572 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-09T21:08:17.573 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T21:08:17.574 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T21:08:17.591 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T21:08:17.596 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T21:08:17.597 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T21:08:17.624 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T21:08:17.691 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T21:08:17.693 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-09T21:08:17.693 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-09T21:08:18.030 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:18.030 INFO:teuthology.orchestra.run.vm10.stdout:Running kernel seems to be up-to-date.
2026-03-09T21:08:18.030 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:18.030 INFO:teuthology.orchestra.run.vm10.stdout:Services to be restarted:
2026-03-09T21:08:18.035 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart packagekit.service
2026-03-09T21:08:18.037 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:Service restarts being deferred:
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart unattended-upgrades.service
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:No containers need to be restarted.
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:No user sessions are running outdated binaries.
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:08:18.038 INFO:teuthology.orchestra.run.vm10.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T21:08:18.204 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-09T21:08:18.279 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T21:08:18.281 INFO:teuthology.orchestra.run.vm07.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T21:08:18.346 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T21:08:18.591 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T21:08:18.923 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:08:18.927 DEBUG:teuthology.parallel:result is None
2026-03-09T21:08:18.970 INFO:teuthology.orchestra.run.vm07.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-09T21:08:18.976 INFO:teuthology.orchestra.run.vm07.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T21:08:18.977 INFO:teuthology.orchestra.run.vm07.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:19.017 INFO:teuthology.orchestra.run.vm07.stdout:Adding system user cephadm....done
2026-03-09T21:08:19.025 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T21:08:19.098 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-09T21:08:19.162 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T21:08:19.164 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-09T21:08:19.230 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-09T21:08:19.346 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T21:08:19.471 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-09T21:08:19.611 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T21:08:19.738 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-09T21:08:19.810 INFO:teuthology.orchestra.run.vm07.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-09T21:08:19.819 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-09T21:08:19.887 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-09T21:08:19.953 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:20.019 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T21:08:20.021 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-09T21:08:20.024 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T21:08:20.027 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T21:08:20.029 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T21:08:20.031 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-09T21:08:20.036 INFO:teuthology.orchestra.run.vm07.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-09T21:08:20.038 INFO:teuthology.orchestra.run.vm07.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-09T21:08:20.040 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T21:08:20.042 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-09T21:08:20.163 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-09T21:08:20.234 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T21:08:20.308 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-09T21:08:20.386 INFO:teuthology.orchestra.run.vm07.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T21:08:20.388 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T21:08:20.663 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T21:08:20.730 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T21:08:20.731 INFO:teuthology.orchestra.run.vm07.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-09T21:08:20.734 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T21:08:20.839 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T21:08:20.976 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T21:08:21.114 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T21:08:21.207 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T21:08:21.324 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T21:08:21.390 INFO:teuthology.orchestra.run.vm07.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T21:08:21.392 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:21.483 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T21:08:22.022 INFO:teuthology.orchestra.run.vm07.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T21:08:22.042 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T21:08:22.047 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T21:08:22.118 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T21:08:22.119 INFO:teuthology.orchestra.run.vm07.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T21:08:22.121 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T21:08:22.193 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T21:08:22.258 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T21:08:22.260 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T21:08:22.333 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T21:08:22.404 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T21:08:22.473 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T21:08:22.538 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T21:08:22.604 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T21:08:22.680 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T21:08:22.682 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T21:08:22.761 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T21:08:22.763 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T21:08:22.831 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T21:08:22.917 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T21:08:23.011 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T21:08:23.080 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T21:08:23.083 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T21:08:23.085 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T21:08:23.087 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T21:08:23.221 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T21:08:23.291 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T21:08:23.293 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T21:08:23.356 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T21:08:23.358 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T21:08:23.436 INFO:teuthology.orchestra.run.vm07.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T21:08:23.437 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T21:08:23.515 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T21:08:23.645 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T21:08:23.734 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T21:08:23.851 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T21:08:23.853 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:23.855 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:23.857 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T21:08:24.609 INFO:teuthology.orchestra.run.vm07.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T21:08:24.717 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:24.719 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:24.722 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:24.724 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:24.727 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:24.788 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T21:08:24.789 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T21:08:25.142 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.145 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.147 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.149 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.151 INFO:teuthology.orchestra.run.vm07.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.154 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.157 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.159 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.196 INFO:teuthology.orchestra.run.vm07.stdout:Adding group ceph....done
2026-03-09T21:08:25.236 INFO:teuthology.orchestra.run.vm07.stdout:Adding system user ceph....done
2026-03-09T21:08:25.245 INFO:teuthology.orchestra.run.vm07.stdout:Setting system user ceph properties....done
2026-03-09T21:08:25.249 INFO:teuthology.orchestra.run.vm07.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-09T21:08:25.315 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-09T21:08:25.553 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-09T21:08:25.911 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:25.913 INFO:teuthology.orchestra.run.vm07.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:26.142 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T21:08:26.142 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T21:08:26.471 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:26.550 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-09T21:08:26.970 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:27.114 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T21:08:27.115 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T21:08:27.514 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:27.573 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T21:08:27.573 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T21:08:27.951 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.022 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T21:08:28.022 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T21:08:28.387 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.395 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.409 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.465 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T21:08:28.465 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T21:08:28.842 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.854 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.856 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.869 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:08:28.981 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T21:08:28.988 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:08:29.002 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:08:29.080 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-09T21:08:29.534 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:29.534 INFO:teuthology.orchestra.run.vm07.stdout:Running kernel seems to be up-to-date.
2026-03-09T21:08:29.534 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:29.534 INFO:teuthology.orchestra.run.vm07.stdout:Services to be restarted:
2026-03-09T21:08:29.540 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart packagekit.service
2026-03-09T21:08:29.542 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:Service restarts being deferred:
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart unattended-upgrades.service
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:No containers need to be restarted.
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:No user sessions are running outdated binaries.
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:29.543 INFO:teuthology.orchestra.run.vm07.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T21:08:30.453 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:08:30.456 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-pytest python3-xmltodict python3-jmespath
2026-03-09T21:08:30.534 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:08:30.750 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:08:30.750 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:08:30.935 INFO:teuthology.orchestra.run.vm07.stdout:python3-pytest is already the newest version (6.2.5-1ubuntu2).
2026-03-09T21:08:30.935 INFO:teuthology.orchestra.run.vm07.stdout:python3-pytest set to manually installed.
2026-03-09T21:08:30.935 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:08:30.935 INFO:teuthology.orchestra.run.vm07.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:08:30.935 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T21:08:30.935 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:08:30.953 INFO:teuthology.orchestra.run.vm07.stdout:The following NEW packages will be installed:
2026-03-09T21:08:30.953 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath python3-xmltodict
2026-03-09T21:08:31.421 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:08:31.421 INFO:teuthology.orchestra.run.vm07.stdout:Need to get 34.3 kB of archives.
2026-03-09T21:08:31.421 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-09T21:08:31.421 INFO:teuthology.orchestra.run.vm07.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T21:08:31.645 INFO:teuthology.orchestra.run.vm07.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T21:08:31.878 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 34.3 kB in 1s (49.4 kB/s)
2026-03-09T21:08:31.892 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T21:08:31.916 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-09T21:08:31.918 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T21:08:31.918 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T21:08:31.937 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T21:08:31.944 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T21:08:31.945 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T21:08:31.978 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T21:08:32.060 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T21:08:32.549 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:32.549 INFO:teuthology.orchestra.run.vm07.stdout:Running kernel seems to be up-to-date.
2026-03-09T21:08:32.549 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:32.549 INFO:teuthology.orchestra.run.vm07.stdout:Services to be restarted:
2026-03-09T21:08:32.554 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart packagekit.service
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:Service restarts being deferred:
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart unattended-upgrades.service
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:No containers need to be restarted.
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:No user sessions are running outdated binaries.
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:08:32.557 INFO:teuthology.orchestra.run.vm07.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T21:08:33.618 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:08:33.621 DEBUG:teuthology.parallel:result is None
2026-03-09T21:08:33.621 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:08:34.298 DEBUG:teuthology.orchestra.run.vm07:> dpkg-query -W -f '${Version}' ceph
2026-03-09T21:08:34.306 INFO:teuthology.orchestra.run.vm07.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:08:34.306 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:08:34.306 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T21:08:34.307 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:08:34.980 DEBUG:teuthology.orchestra.run.vm10:> dpkg-query -W -f '${Version}' ceph
2026-03-09T21:08:34.990 INFO:teuthology.orchestra.run.vm10.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:08:34.990 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T21:08:34.990 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T21:08:34.991 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T21:08:34.991 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:08:34.991 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T21:08:35.000 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:08:35.000 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T21:08:35.041 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T21:08:35.042 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:08:35.042 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T21:08:35.049 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T21:08:35.097 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:08:35.097 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T21:08:35.105 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T21:08:35.157 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T21:08:35.157 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:08:35.157 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T21:08:35.165 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T21:08:35.212 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:08:35.212 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T21:08:35.220 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T21:08:35.269 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T21:08:35.270 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:08:35.270 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T21:08:35.278 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T21:08:35.328 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:08:35.328 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T21:08:35.335 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T21:08:35.384 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T21:08:35.447 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'global': {'mon election default strategy': 3, 'ms bind msgr1': False, 'ms bind msgr2': True, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'but it is still running', 'overall HEALTH_', '\\(OSDMAP_FLAGS\\)', '\\(PG_', '\\(OSD_', '\\(OBJECT_', '\\(POOL_APP_NOT_ENABLED\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'root'}
2026-03-09T21:08:35.447 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:08:35.447 INFO:tasks.cephadm:Cluster fsid is 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:08:35.447 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T21:08:35.447 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.107', 'mon.c': '[v2:192.168.123.107:3301,v1:192.168.123.107:6790]', 'mon.b': '192.168.123.110'}
2026-03-09T21:08:35.447 INFO:tasks.cephadm:First mon is mon.a on vm07
2026-03-09T21:08:35.447 INFO:tasks.cephadm:First mgr is y
2026-03-09T21:08:35.447 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T21:08:35.447 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s)
2026-03-09T21:08:35.454 DEBUG:teuthology.orchestra.run.vm10:> sudo hostname $(hostname -s)
2026-03-09T21:08:35.463 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-09T21:08:35.463 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:08:36.058 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-09T21:08:36.630 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:08:36.631 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T21:08:36.631 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T21:08:36.631 DEBUG:teuthology.orchestra.run.vm07:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T21:08:37.978 INFO:teuthology.orchestra.run.vm07.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 21:08 /home/ubuntu/cephtest/cephadm
2026-03-09T21:08:37.978 DEBUG:teuthology.orchestra.run.vm10:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T21:08:39.337 INFO:teuthology.orchestra.run.vm10.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 21:08 /home/ubuntu/cephtest/cephadm
2026-03-09T21:08:39.337 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T21:08:39.341 DEBUG:teuthology.orchestra.run.vm10:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T21:08:39.349 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-09T21:08:39.350 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T21:08:39.382 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T21:08:39.477 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T21:08:39.481 INFO:teuthology.orchestra.run.vm10.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout:{
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout: "repo_digests": [
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout: ]
2026-03-09T21:09:48.653 INFO:teuthology.orchestra.run.vm10.stdout:}
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout: ]
2026-03-09T21:09:49.374 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T21:09:49.384 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph
2026-03-09T21:09:49.391 DEBUG:teuthology.orchestra.run.vm10:> sudo mkdir -p /etc/ceph
2026-03-09T21:09:49.400 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph
2026-03-09T21:09:49.441 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 777 /etc/ceph
2026-03-09T21:09:49.448 INFO:tasks.cephadm:Writing seed config...
2026-03-09T21:09:49.448 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [global] ms bind msgr1 = False
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [global] ms bind msgr2 = True
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [global] ms type = async
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T21:09:49.449 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-09T21:09:49.449 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:09:49.449 DEBUG:teuthology.orchestra.run.vm07:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T21:09:49.485 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 22c897f4-1bfc-11f1-adaa-13127443f8b3
mon election default strategy = 3
ms bind msgr1 = False
ms bind msgr2 = True
ms type = async
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T21:09:49.485 DEBUG:teuthology.orchestra.run.vm07:mon.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a.service
2026-03-09T21:09:49.527 DEBUG:teuthology.orchestra.run.vm07:mgr.y> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y.service
2026-03-09T21:09:49.570 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T21:09:49.571 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.107 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T21:09:49.711 INFO:teuthology.orchestra.run.vm07.stdout:--------------------------------------------------------------------------------
2026-03-09T21:09:49.711 INFO:teuthology.orchestra.run.vm07.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '22c897f4-1bfc-11f1-adaa-13127443f8b3', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.107', '--skip-admin-label']
2026-03-09T21:09:49.712 INFO:teuthology.orchestra.run.vm07.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-09T21:09:49.712 INFO:teuthology.orchestra.run.vm07.stdout:Verifying podman|docker is present...
2026-03-09T21:09:49.712 INFO:teuthology.orchestra.run.vm07.stdout:Verifying lvm2 is present...
2026-03-09T21:09:49.712 INFO:teuthology.orchestra.run.vm07.stdout:Verifying time synchronization is in place...
2026-03-09T21:09:49.715 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T21:09:49.715 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T21:09:49.717 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T21:09:49.717 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.720 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T21:09:49.720 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T21:09:49.722 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T21:09:49.723 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.725 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T21:09:49.725 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout masked
2026-03-09T21:09:49.727 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T21:09:49.727 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.730 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T21:09:49.730 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T21:09:49.733 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T21:09:49.733 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.736 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout enabled
2026-03-09T21:09:49.740 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout active
2026-03-09T21:09:49.740 INFO:teuthology.orchestra.run.vm07.stdout:Unit ntp.service is enabled and running
2026-03-09T21:09:49.740 INFO:teuthology.orchestra.run.vm07.stdout:Repeating the final host check...
2026-03-09T21:09:49.740 INFO:teuthology.orchestra.run.vm07.stdout:docker (/usr/bin/docker) is present
2026-03-09T21:09:49.740 INFO:teuthology.orchestra.run.vm07.stdout:systemctl is present
2026-03-09T21:09:49.740 INFO:teuthology.orchestra.run.vm07.stdout:lvcreate is present
2026-03-09T21:09:49.743 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T21:09:49.743 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T21:09:49.746 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T21:09:49.746 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.749 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T21:09:49.749 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T21:09:49.752 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T21:09:49.752 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.755 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T21:09:49.755 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout masked
2026-03-09T21:09:49.757 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T21:09:49.757 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.760 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T21:09:49.760 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T21:09:49.763 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T21:09:49.763 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T21:09:49.766 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout enabled
2026-03-09T21:09:49.768 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout active
2026-03-09T21:09:49.768 INFO:teuthology.orchestra.run.vm07.stdout:Unit ntp.service is enabled and running
2026-03-09T21:09:49.768 INFO:teuthology.orchestra.run.vm07.stdout:Host looks OK
2026-03-09T21:09:49.768 INFO:teuthology.orchestra.run.vm07.stdout:Cluster fsid: 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:09:49.768 INFO:teuthology.orchestra.run.vm07.stdout:Acquiring lock 140273822534192 on /run/cephadm/22c897f4-1bfc-11f1-adaa-13127443f8b3.lock
2026-03-09T21:09:49.768 INFO:teuthology.orchestra.run.vm07.stdout:Lock 140273822534192 acquired on /run/cephadm/22c897f4-1bfc-11f1-adaa-13127443f8b3.lock
2026-03-09T21:09:49.769 INFO:teuthology.orchestra.run.vm07.stdout:Verifying IP 192.168.123.107 port 3300 ...
2026-03-09T21:09:49.769 INFO:teuthology.orchestra.run.vm07.stdout:Verifying IP 192.168.123.107 port 6789 ...
2026-03-09T21:09:49.769 INFO:teuthology.orchestra.run.vm07.stdout:Base mon IP(s) is [192.168.123.107:3300, 192.168.123.107:6789], mon addrv is [v2:192.168.123.107:3300,v1:192.168.123.107:6789]
2026-03-09T21:09:49.771 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.107 metric 100
2026-03-09T21:09:49.771 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-09T21:09:49.771 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.107 metric 100
2026-03-09T21:09:49.771 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.107 metric 100
2026-03-09T21:09:49.772 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T21:09:49.772 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-09T21:09:49.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T21:09:49.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-09T21:09:49.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T21:09:49.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-09T21:09:49.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:7/64 scope link
2026-03-09T21:09:49.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T21:09:49.774 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24`
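The mon public network is inferred from the `ip route` output above by testing which locally configured CIDRs contain the mon IP. The membership test itself can be reproduced with the standard-library `ipaddress` module (a sketch using the networks from this log; note that a strict `ipaddress` check only matches the /24, whereas cephadm's log also lists the /32 host route):

```python
import ipaddress

# Mon IP and networks taken from the `ip route` output in the log above.
mon_ip = ipaddress.ip_address('192.168.123.107')
local_cidrs = ['192.168.123.0/24', '172.17.0.0/16', '192.168.123.1/32']

# Keep only the networks that actually contain the mon IP; the docker0
# network (172.17.0.0/16) and the /32 host route do not.
matches = [c for c in local_cidrs
           if mon_ip in ipaddress.ip_network(c, strict=False)]
print(matches)  # ['192.168.123.0/24']
```

Since no `--cluster-network` is given, the inferred public network is also used for OSD replication, as the "Internal network ... has not been provided" message states.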
2026-03-09T21:09:49.774 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24`
2026-03-09T21:09:49.774 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.1/32`
2026-03-09T21:09:49.774 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.1/32`
2026-03-09T21:09:49.774 INFO:teuthology.orchestra.run.vm07.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-09T21:09:49.774 INFO:teuthology.orchestra.run.vm07.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T21:09:49.775 INFO:teuthology.orchestra.run.vm07.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T21:09:50.988 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-09T21:09:50.988 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T21:09:50.988 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:09:50.988 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T21:09:51.139 INFO:teuthology.orchestra.run.vm07.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T21:09:51.139 INFO:teuthology.orchestra.run.vm07.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T21:09:51.139 INFO:teuthology.orchestra.run.vm07.stdout:Extracting ceph user uid/gid from container image...
2026-03-09T21:09:51.236 INFO:teuthology.orchestra.run.vm07.stdout:stat: stdout 167 167
2026-03-09T21:09:51.236 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial keys...
2026-03-09T21:09:51.334 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQAfN69pk4c8EhAAOvlgBqavgobcTwVIFW2tLw==
2026-03-09T21:09:51.434 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQAfN69pwyUwGBAAJDYwvCQiESCIVl7KFPG4lg==
2026-03-09T21:09:51.553 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQAfN69pAyV5HhAAz6/hw9qzJzovdG74VydSzA==
2026-03-09T21:09:51.553 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial monmap...
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:monmaptool for a [v2:192.168.123.107:3300,v1:192.168.123.107:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:setting min_mon_release = quincy
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: set fsid to 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:09:51.671 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T21:09:51.672 INFO:teuthology.orchestra.run.vm07.stdout:
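The "Base mon IP(s)" message earlier and the monmaptool output here show the mon address vector that bootstrap builds from a single base IP: msgr2 on port 3300 and msgr1 on port 6789, joined as `[v2:ip:3300,v1:ip:6789]` and passed to monmaptool for mon `a`. A hypothetical helper reproducing only that string formatting (the function name and defaults are illustrative, not cephadm's API):

```python
def build_addrv(ip, msgr2_port=3300, msgr1_port=6789):
    """Format a mon address vector like the one in the log:
    [v2:192.168.123.107:3300,v1:192.168.123.107:6789]
    Hypothetical helper; cephadm constructs this string internally.
    Assumes a plain IPv4 address (IPv6 would need bracketing)."""
    return f'[v2:{ip}:{msgr2_port},v1:{ip}:{msgr1_port}]'

addrv = build_addrv('192.168.123.107')
print(addrv)  # [v2:192.168.123.107:3300,v1:192.168.123.107:6789]
```

With `ms bind msgr2: true` and `ms bind msgr1: false` in this job's overrides, clients will use the v2 endpoint, but the monmap still records both addresses of the vector.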
2026-03-09T21:09:51.672 INFO:teuthology.orchestra.run.vm07.stdout:Creating mon... 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.747+0000 7fee28a17d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.747+0000 7fee28a17d80 1 imported monmap: 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.747+0000 7fee28a17d80 0 /usr/bin/ceph-mon: set fsid to 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Git sha 0 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: DB SUMMARY 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: DB Session ID: 5R5P44KZTLY11HM40TMQ 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.create_if_missing: 1 2026-03-09T21:09:51.794 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T21:09:51.795 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.env: 0x55fc4af8edc0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.info_log: 0x55fc6e9a6da0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_manifest_file_size: 
1073741824 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T21:09:51.795 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.db_log_dir: 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: 
Options.wal_dir: 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.write_buffer_manager: 0x55fc6e99d5e0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T21:09:51.798 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.row_cache: None 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.wal_filter: None 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.persist_stats_to_disk: 0 
2026-03-09T21:09:51.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_open_files: -1 
2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Compression algorithms supported: 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kZSTD supported: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kLZ4Compression 
supported: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 
debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.merge_operator: 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fc6e999520) 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 
pin_top_level_index_and_filter: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55fc6e9bf350 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-09T21:09:51.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 
10 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 
2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.num_levels: 7 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T21:09:51.800 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T21:09:51.800 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T21:09:51.800 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T21:09:51.801 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T21:09:51.801 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T21:09:51.801 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.751+0000 7fee28a17d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.755+0000 7fee28a17d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.755+0000 7fee28a17d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 
2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.755+0000 7fee28a17d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b946f624-2ad6-4489-b014-07249b1e6f6a 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.755+0000 7fee28a17d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.755+0000 7fee28a17d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fc6e9c0e00 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.755+0000 7fee28a17d80 4 rocksdb: DB pointer 0x55fc6eaa4000 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.759+0000 7fee201a1640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.759+0000 7fee201a1640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T21:09:51.801 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 
0.00 GB, 0.00 MB/s 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr ** 
Compaction Stats [default] ** 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T21:09:51.802 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55fc6e9bf350#8 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.759+0000 7fee28a17d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.759+0000 7fee28a17d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T21:09:51.759+0000 7fee28a17d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-09T21:09:51.802 INFO:teuthology.orchestra.run.vm07.stdout:create mon.a on 
2026-03-09T21:09:51.957 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-09T21:09:52.121 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T21:09:52.304 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3.target → /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3.target. 2026-03-09T21:09:52.304 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3.target → /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3.target. 2026-03-09T21:09:52.496 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a 2026-03-09T21:09:52.496 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to reset failed state of unit ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a.service: Unit ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a.service not loaded. 2026-03-09T21:09:52.637 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3.target.wants/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a.service → /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service. 2026-03-09T21:09:52.646 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-09T21:09:52.646 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T21:09:52.647 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mon to start... 
2026-03-09T21:09:52.647 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mon... 2026-03-09T21:09:53.053 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:52 vm07 bash[20311]: cluster 2026-03-09T21:09:52.800082+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout id: 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout services: 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.239207s) 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout data: 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T21:09:53.082 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:53.083 INFO:teuthology.orchestra.run.vm07.stdout:mon is available 2026-03-09T21:09:53.083 
INFO:teuthology.orchestra.run.vm07.stdout:Assimilating anything we can from ceph.conf... 2026-03-09T21:09:53.436 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout fsid = 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T21:09:53.437 INFO:teuthology.orchestra.run.vm07.stdout:Generating new minimal ceph.conf... 
2026-03-09T21:09:53.616 INFO:teuthology.orchestra.run.vm07.stdout:Restarting the monitor... 2026-03-09T21:09:53.799 INFO:teuthology.orchestra.run.vm07.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 systemd[1]: Stopping Ceph mon.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20311]: debug 2026-03-09T21:09:53.659+0000 7f218ea19640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20311]: debug 2026-03-09T21:09:53.659+0000 7f218ea19640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20685]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-mon-a 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20738]: Error response from daemon: No such container: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-mon-a 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a.service: Deactivated successfully. 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 systemd[1]: Stopped Ceph mon.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:09:53.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 systemd[1]: Started Ceph mon.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:09:54.033 INFO:teuthology.orchestra.run.vm07.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-09T21:09:54.034 INFO:teuthology.orchestra.run.vm07.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-09T21:09:54.034 INFO:teuthology.orchestra.run.vm07.stdout:Creating mgr...
2026-03-09T21:09:54.034 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-09T21:09:54.034 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.899+0000 7efedc8f7d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.899+0000 7efedc8f7d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.899+0000 7efedc8f7d80 0 pidfile_write: ignore empty --pid-file
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.899+0000 7efedc8f7d80 0 load: jerasure load: lrc
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Git sha 0
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: DB SUMMARY
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: DB Session ID: KUNK6DT5WKJL6Y3G97GW
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-09T21:09:54.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 76789 ;
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.create_if_missing: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.env: 0x561a7af48dc0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.info_log: 0x561a9db62d00
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.statistics: (nil)
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.use_fsync: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.db_log_dir:
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.wal_dir:
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.write_buffer_manager: 0x561a9db67900
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.unordered_write: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.row_cache: None
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.wal_filter: None
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.two_write_queues: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T21:09:54.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.wal_compression: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.atomic_flush: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_open_files: -1
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Compression algorithms supported:
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kZSTD supported: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kXpressCompression supported: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-09T21:09:54.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kZlibCompression supported: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.merge_operator:
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_filter: None
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561a9db62480)
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cache_index_and_filter_blocks: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: pin_top_level_index_and_filter: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: index_type: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: data_block_index_type: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: index_shortening: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: data_block_hash_table_util_ratio: 0.750000
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: checksum: 4
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: no_block_cache: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_cache: 0x561a9db89350
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_cache_name: BinnedLRUCache
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_cache_options:
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: capacity : 536870912
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: num_shard_bits : 4
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: strict_capacity_limit : 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: high_pri_pool_ratio: 0.000
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_cache_compressed: (nil)
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: persistent_cache: (nil)
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_size: 4096
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_size_deviation: 10
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_restart_interval: 16
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: index_block_restart_interval: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: metadata_block_size: 4096
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: partition_filters: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: use_delta_encoding: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: filter_policy: bloomfilter
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: whole_key_filtering: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: verify_compression: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: read_amp_bytes_per_bit: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: format_version: 5
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: enable_index_compression: 1
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: block_align: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: max_auto_readahead_size: 262144
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: prepopulate_block_cache: 0
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: initial_auto_readahead_size: 8192
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: num_file_reads_for_auto_readahead: 2
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression: NoCompression
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-09T21:09:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.num_levels: 7
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 
2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T21:09:54.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 
vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.blob_file_size: 
268435456 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.903+0000 7efedc8f7d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 
7efedc8f7d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b946f624-2ad6-4489-b014-07249b1e6f6a 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773090593911064, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.907+0000 7efedc8f7d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.911+0000 7efedc8f7d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773090593916831, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 73643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 231, "table_properties": {"data_size": 71922, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 10026, "raw_average_key_size": 49, "raw_value_size": 66337, "raw_average_value_size": 328, "num_data_blocks": 8, "num_entries": 202, "num_filter_entries": 202, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": 
"window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773090593, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b946f624-2ad6-4489-b014-07249b1e6f6a", "db_session_id": "KUNK6DT5WKJL6Y3G97GW", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.911+0000 7efedc8f7d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773090593916902, "job": 1, "event": "recovery_finished"} 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: debug 2026-03-09T21:09:53.911+0000 7efedc8f7d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923644+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923644+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923675+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923675+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923679+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 
bash[20771]: cluster 2026-03-09T21:09:53.923679+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923682+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923682+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923687+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923687+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923689+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923689+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923692+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923692+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923695+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 
2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923695+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923918+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923918+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923927+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.923927+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.924333+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T21:09:54.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:53 vm07 bash[20771]: cluster 2026-03-09T21:09:53.924333+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T21:09:54.214 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y 2026-03-09T21:09:54.215 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to reset failed state of unit ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y.service: Unit ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y.service not loaded. 
2026-03-09T21:09:54.382 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3.target.wants/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y.service → /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service. 2026-03-09T21:09:54.389 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-09T21:09:54.389 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T21:09:54.389 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-09T21:09:54.389 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T21:09:54.389 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr to start... 2026-03-09T21:09:54.389 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr... 2026-03-09T21:09:54.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:09:54.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "22c897f4-1bfc-11f1-adaa-13127443f8b3", 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T21:09:54.621 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T21:09:54.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T21:09:54.622 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T21:09:52:806787+0000", 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T21:09:52.808559+0000", 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T21:09:54.622 INFO:teuthology.orchestra.run.vm07.stdout:mgr not available, waiting (1/15)... 2026-03-09T21:09:54.754 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:54 vm07 bash[21040]: debug 2026-03-09T21:09:54.635+0000 7f60a3398140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T21:09:54.754 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:54 vm07 bash[21040]: debug 2026-03-09T21:09:54.747+0000 7f60a3398140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T21:09:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.023+0000 7f60a3398140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T21:09:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:55 vm07 bash[20771]: audit 2026-03-09T21:09:53.993001+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.107:0/1076973491' entity='client.admin' 2026-03-09T21:09:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:55 vm07 bash[20771]: audit 2026-03-09T21:09:53.993001+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.107:0/1076973491' entity='client.admin' 2026-03-09T21:09:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:55 vm07 bash[20771]: audit 2026-03-09T21:09:54.568490+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.107:0/2876237503' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T21:09:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:55 vm07 bash[20771]: audit 2026-03-09T21:09:54.568490+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.107:0/2876237503' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T21:09:55.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.483+0000 7f60a3398140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T21:09:55.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.575+0000 7f60a3398140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T21:09:55.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T21:09:55.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T21:09:55.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: from numpy import show_config as show_numpy_config
2026-03-09T21:09:55.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.719+0000 7f60a3398140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T21:09:56.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.859+0000 7f60a3398140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T21:09:56.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.899+0000 7f60a3398140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T21:09:56.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.939+0000 7f60a3398140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T21:09:56.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:55 vm07 bash[21040]: debug 2026-03-09T21:09:55.987+0000 7f60a3398140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T21:09:56.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.039+0000 7f60a3398140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T21:09:56.736 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.479+0000 7f60a3398140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T21:09:56.736 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.515+0000 7f60a3398140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T21:09:56.736 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.547+0000 7f60a3398140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "22c897f4-1bfc-11f1-adaa-13127443f8b3",
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 2,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T21:09:56.993 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T21:09:56.994 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T21:09:52:806787+0000",
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T21:09:52.808559+0000",
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-09T21:09:56.995 INFO:teuthology.orchestra.run.vm07.stdout:mgr not available, waiting (2/15)...
2026-03-09T21:09:57.092 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:56 vm07 bash[20771]: audit 2026-03-09T21:09:56.811941+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.107:0/742307409' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T21:09:57.092 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:56 vm07 bash[20771]: audit 2026-03-09T21:09:56.811941+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.107:0/742307409' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T21:09:57.092 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.731+0000 7f60a3398140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T21:09:57.092 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.779+0000 7f60a3398140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T21:09:57.092 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.819+0000 7f60a3398140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T21:09:57.092 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:56 vm07 bash[21040]: debug 2026-03-09T21:09:56.931+0000 7f60a3398140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:09:57.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:57 vm07 bash[21040]: debug 2026-03-09T21:09:57.087+0000 7f60a3398140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T21:09:57.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:57 vm07 bash[21040]: debug 2026-03-09T21:09:57.255+0000 7f60a3398140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T21:09:57.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:57 vm07 bash[21040]: debug 2026-03-09T21:09:57.287+0000 7f60a3398140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T21:09:57.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:57 vm07 bash[21040]: debug 2026-03-09T21:09:57.331+0000 7f60a3398140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T21:09:57.800 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:57 vm07 bash[21040]: debug 2026-03-09T21:09:57.491+0000 7f60a3398140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: cluster 2026-03-09T21:09:57.800793+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: cluster 2026-03-09T21:09:57.800793+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: cluster 2026-03-09T21:09:57.806245+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00553332s)
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: cluster 2026-03-09T21:09:57.806245+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00553332s)
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.808939+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.808939+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.809181+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.809181+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.809414+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.809414+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.810630+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.810630+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.810741+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.810741+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: cluster 2026-03-09T21:09:57.819743+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: cluster 2026-03-09T21:09:57.819743+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.834173+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y'
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.834173+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y'
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.838522+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y'
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.838522+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y'
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.840593+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y'
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.840593+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y'
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.842931+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.842931+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.849054+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:57 vm07 bash[20771]: audit 2026-03-09T21:09:57.849054+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:09:58.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:09:57 vm07 bash[21040]: debug 2026-03-09T21:09:57.795+0000 7f60a3398140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T21:09:59.570 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout
2026-03-09T21:09:59.570 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {
2026-03-09T21:09:59.570 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "22c897f4-1bfc-11f1-adaa-13127443f8b3",
2026-03-09T21:09:59.570 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T21:09:59.570 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T21:09:59.570 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T21:09:59.571 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T21:09:52:806787+0000",
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ],
2026-03-09T21:09:59.572 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T21:09:52.808559+0000",
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout },
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-09T21:09:59.573 INFO:teuthology.orchestra.run.vm07.stdout:mgr is available
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout fsid = 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.107:3300,v1:192.168.123.107:6789]
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout
2026-03-09T21:10:00.073 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T21:10:00.074 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T21:10:00.074 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout
2026-03-09T21:10:00.074 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T21:10:00.074 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T21:10:00.074 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T21:10:00.074 INFO:teuthology.orchestra.run.vm07.stdout:Enabling cephadm module...
2026-03-09T21:10:00.086 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:59 vm07 bash[20771]: cluster 2026-03-09T21:09:58.810845+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01013s)
2026-03-09T21:10:00.086 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:59 vm07 bash[20771]: cluster 2026-03-09T21:09:58.810845+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01013s)
2026-03-09T21:10:00.086 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:59 vm07 bash[20771]: audit 2026-03-09T21:09:59.536915+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.107:0/1267886844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T21:10:00.086 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:59 vm07 bash[20771]: audit 2026-03-09T21:09:59.536915+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.107:0/1267886844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T21:10:00.086 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:59 vm07 bash[20771]: audit 2026-03-09T21:09:59.781874+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.107:0/4264837363' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T21:10:00.086 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:09:59 vm07 bash[20771]: audit 2026-03-09T21:09:59.781874+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.107:0/4264837363' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:01 vm07 bash[20771]: cluster 2026-03-09T21:10:00.069310+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:01 vm07 bash[20771]: cluster 2026-03-09T21:10:00.069310+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:01 vm07 bash[20771]: audit 2026-03-09T21:10:00.331092+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:01 vm07 bash[20771]: audit 2026-03-09T21:10:00.331092+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:01 vm07 bash[21040]: ignoring --setuser ceph since I am not root
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:01 vm07 bash[21040]: ignoring --setgroup ceph since I am not root
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:01 vm07 bash[21040]: debug 2026-03-09T21:10:01.195+0000 7fe1b488e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T21:10:01.363 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:01 vm07 bash[21040]: debug 2026-03-09T21:10:01.239+0000 7fe1b488e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 5,
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "active_name": "y",
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for the mgr to restart...
2026-03-09T21:10:01.430 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr epoch 5...
2026-03-09T21:10:01.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:01 vm07 bash[21040]: debug 2026-03-09T21:10:01.359+0000 7fe1b488e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T21:10:02.074 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:01 vm07 bash[21040]: debug 2026-03-09T21:10:01.659+0000 7fe1b488e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:02 vm07 bash[20771]: audit 2026-03-09T21:10:01.072160+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:02 vm07 bash[20771]: audit 2026-03-09T21:10:01.072160+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:02 vm07 bash[20771]: cluster 2026-03-09T21:10:01.074102+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s)
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:02 vm07 bash[20771]: cluster 2026-03-09T21:10:01.074102+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s)
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:02 vm07 bash[20771]: audit 2026-03-09T21:10:01.385753+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.107:0/558367829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:02 vm07 bash[20771]: audit 2026-03-09T21:10:01.385753+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.107:0/558367829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.087+0000 7fe1b488e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.171+0000 7fe1b488e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: from numpy import show_config as show_numpy_config
2026-03-09T21:10:02.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.291+0000 7fe1b488e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T21:10:02.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.419+0000 7fe1b488e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T21:10:02.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.455+0000 7fe1b488e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T21:10:02.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.491+0000 7fe1b488e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T21:10:02.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.531+0000 7fe1b488e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T21:10:02.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:02 vm07 bash[21040]: debug 2026-03-09T21:10:02.579+0000 7fe1b488e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T21:10:03.257 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:02.999+0000 7fe1b488e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T21:10:03.258 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.035+0000 7fe1b488e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T21:10:03.258 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.071+0000 7fe1b488e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T21:10:03.258 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.211+0000 7fe1b488e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T21:10:03.557 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.251+0000 7fe1b488e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T21:10:03.557 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.291+0000 7fe1b488e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T21:10:03.557 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.395+0000 7fe1b488e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:10:03.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.551+0000 7fe1b488e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T21:10:03.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.723+0000 7fe1b488e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T21:10:03.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.755+0000 7fe1b488e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T21:10:03.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.795+0000 7fe1b488e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T21:10:04.207 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:03 vm07 bash[21040]: debug 2026-03-09T21:10:03.931+0000 7fe1b488e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:10:04.207 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:04 vm07 bash[21040]: debug 2026-03-09T21:10:04.147+0000 7fe1b488e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.153207+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.153207+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.153406+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.153406+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.157744+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.157744+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.158272+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00494476s)
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.158272+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00494476s)
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.161061+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.161061+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.161905+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.161905+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.162780+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.162780+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.162975+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.162975+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:04.616 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.163195+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.163195+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.169280+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: cluster 2026-03-09T21:10:04.169280+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.177769+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.177769+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.180478+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.180478+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.194067+0000 mon.a (mon.0) 48 : audit [INF] 
from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.194067+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.194397+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.194397+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.195517+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.195517+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:04.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.196285+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:04.616 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:04 vm07 bash[20771]: audit 2026-03-09T21:10:04.196285+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:05.214 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T21:10:05.214 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-09T21:10:05.214 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T21:10:05.214 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T21:10:05.214 INFO:teuthology.orchestra.run.vm07.stdout:mgr epoch 5 is available 2026-03-09T21:10:05.214 INFO:teuthology.orchestra.run.vm07.stdout:Setting orchestrator backend to cephadm... 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: cephadm 2026-03-09T21:10:04.175500+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: cephadm 2026-03-09T21:10:04.175500+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:04.657478+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:04.657478+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:04.659800+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:04.659800+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: cluster 2026-03-09T21:10:05.167121+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.01379s) 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: cluster 2026-03-09T21:10:05.167121+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.01379s) 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:05.419972+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:05.419972+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 
2026-03-09T21:10:05.509378+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:05.509378+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:05.514619+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:05.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:05 vm07 bash[20771]: audit 2026-03-09T21:10:05.514619+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:05.797 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T21:10:05.797 INFO:teuthology.orchestra.run.vm07.stdout:Generating ssh key... 2026-03-09T21:10:06.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: Generating public/private ed25519 key pair. 
2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: Your identification has been saved in /tmp/tmpn1l777f5/key 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: Your public key has been saved in /tmp/tmpn1l777f5/key.pub 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: The key fingerprint is: 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: SHA256:M2/aLQl2IvvE0P9y0rRYM/Tf4Fpuv5Vwk81MZuZGwpk ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: The key's randomart image is: 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: +--[ED25519 256]--+ 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | | 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | . o | 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | E *| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | . . @o| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | . S . o +*| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | .o+=. = =.o| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | +o++* =.+o| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | .. 
+*o+oo +| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: | .o .=+o.oo| 2026-03-09T21:10:06.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:06 vm07 bash[21040]: +----[SHA256]-----+ 2026-03-09T21:10:06.553 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMiRdo+3e4EGtGa2okT2caRGcbfUa4+9VemqVJhsxYm ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:06.553 INFO:teuthology.orchestra.run.vm07.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T21:10:06.553 INFO:teuthology.orchestra.run.vm07.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T21:10:06.553 INFO:teuthology.orchestra.run.vm07.stdout:Adding host vm07... 2026-03-09T21:10:06.785 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.160489+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.160489+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.164877+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.164877+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.205334+0000 mgr.y (mgr.14118) 4 : cephadm [INF] 
[09/Mar/2026:21:10:05] ENGINE Bus STARTING 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.205334+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTING 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.306952+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.306952+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.419250+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.419250+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.419296+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTED 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.419296+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTED 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.419828+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Client ('192.168.123.107', 51620) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been 
closed (EOF) (_ssl.c:1147)') 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:05.419828+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Client ('192.168.123.107', 51620) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.505590+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.505590+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.762325+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:05.762325+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:06.002761+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 
vm07 bash[20771]: audit 2026-03-09T21:10:06.002761+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:06.003001+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: cephadm 2026-03-09T21:10:06.003001+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:06.165888+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:06.165888+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:06.184000+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:06.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:06 vm07 bash[20771]: audit 2026-03-09T21:10:06.184000+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:07.808 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:07 vm07 bash[20771]: audit 2026-03-09T21:10:06.519557+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:07.809 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:07 vm07 bash[20771]: audit 2026-03-09T21:10:06.519557+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' 
entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:07.809 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:07 vm07 bash[20771]: cluster 2026-03-09T21:10:06.662102+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T21:10:07.809 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:07 vm07 bash[20771]: cluster 2026-03-09T21:10:06.662102+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T21:10:07.809 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:07 vm07 bash[20771]: audit 2026-03-09T21:10:06.769207+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:07.809 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:07 vm07 bash[20771]: audit 2026-03-09T21:10:06.769207+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:08.899 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Added host 'vm07' with addr '192.168.123.107' 2026-03-09T21:10:08.899 INFO:teuthology.orchestra.run.vm07.stdout:Deploying unmanaged mon service... 2026-03-09T21:10:09.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:08 vm07 bash[20771]: cephadm 2026-03-09T21:10:07.439659+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T21:10:09.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:08 vm07 bash[20771]: cephadm 2026-03-09T21:10:07.439659+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T21:10:09.168 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled mon update... 
2026-03-09T21:10:09.168 INFO:teuthology.orchestra.run.vm07.stdout:Deploying unmanaged mgr service... 2026-03-09T21:10:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:08.835036+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:08.835036+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: cephadm 2026-03-09T21:10:08.836007+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm07 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: cephadm 2026-03-09T21:10:08.836007+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm07 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:08.836365+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:08.836365+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:09.126387+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:09.126387+0000 mon.a (mon.0) 63 
: audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:09.394945+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:09.394945+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:09.639032+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.107:0/767791357' entity='client.admin' 2026-03-09T21:10:10.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:09 vm07 bash[20771]: audit 2026-03-09T21:10:09.639032+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.107:0/767791357' entity='client.admin' 2026-03-09T21:10:10.140 INFO:teuthology.orchestra.run.vm07.stdout:Enabling the dashboard module... 
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: audit 2026-03-09T21:10:09.122701+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: cephadm 2026-03-09T21:10:09.123590+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: audit 2026-03-09T21:10:09.391162+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: cephadm 2026-03-09T21:10:09.391891+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: audit 2026-03-09T21:10:10.078303+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.107:0/1146735487' entity='client.admin'
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: audit 2026-03-09T21:10:10.317109+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: audit 2026-03-09T21:10:10.452588+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T21:10:11.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:11 vm07 bash[20771]: audit 2026-03-09T21:10:10.600723+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:11.722 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:11 vm07 bash[21040]: ignoring --setuser ceph since I am not root
2026-03-09T21:10:11.722 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:11 vm07 bash[21040]: ignoring --setgroup ceph since I am not root
2026-03-09T21:10:11.722 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:11 vm07 bash[21040]: debug 2026-03-09T21:10:11.555+0000 7fc5b1a11140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T21:10:11.722 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:11 vm07 bash[21040]: debug 2026-03-09T21:10:11.595+0000 7fc5b1a11140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "active_name": "y",
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for the mgr to restart...
2026-03-09T21:10:11.814 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr epoch 9...
2026-03-09T21:10:12.041 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:11 vm07 bash[21040]: debug 2026-03-09T21:10:11.715+0000 7fc5b1a11140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T21:10:12.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.035+0000 7fc5b1a11140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T21:10:12.686 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.479+0000 7fc5b1a11140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T21:10:12.686 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.559+0000 7fc5b1a11140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T21:10:12.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:12 vm07 bash[20771]: audit 2026-03-09T21:10:11.406718+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-09T21:10:12.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:12 vm07 bash[20771]: cluster 2026-03-09T21:10:11.413262+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s)
2026-03-09T21:10:12.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:12 vm07 bash[20771]: audit 2026-03-09T21:10:11.763406+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.107:0/4131694319' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: from numpy import show_config as show_numpy_config
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.687+0000 7fc5b1a11140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.819+0000 7fc5b1a11140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.859+0000 7fc5b1a11140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T21:10:12.941 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.895+0000 7fc5b1a11140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T21:10:13.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.935+0000 7fc5b1a11140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T21:10:13.366 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:12 vm07 bash[21040]: debug 2026-03-09T21:10:12.983+0000 7fc5b1a11140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T21:10:13.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.395+0000 7fc5b1a11140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T21:10:13.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.427+0000 7fc5b1a11140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T21:10:13.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.459+0000 7fc5b1a11140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T21:10:13.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.587+0000 7fc5b1a11140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T21:10:13.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.623+0000 7fc5b1a11140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T21:10:13.921 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.663+0000 7fc5b1a11140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T21:10:13.921 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.771+0000 7fc5b1a11140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:10:14.315 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:13 vm07 bash[21040]: debug 2026-03-09T21:10:13.915+0000 7fc5b1a11140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T21:10:14.315 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:14 vm07 bash[21040]: debug 2026-03-09T21:10:14.083+0000 7fc5b1a11140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T21:10:14.315 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:14 vm07 bash[21040]: debug 2026-03-09T21:10:14.115+0000 7fc5b1a11140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T21:10:14.315 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:14 vm07 bash[21040]: debug 2026-03-09T21:10:14.159+0000 7fc5b1a11140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T21:10:14.610 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:14 vm07 bash[21040]: debug 2026-03-09T21:10:14.311+0000 7fc5b1a11140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:10:14.610 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:14 vm07 bash[21040]: debug 2026-03-09T21:10:14.555+0000 7fc5b1a11140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: cluster 2026-03-09T21:10:14.562105+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: cluster 2026-03-09T21:10:14.562677+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: cluster 2026-03-09T21:10:14.573746+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: cluster 2026-03-09T21:10:14.573925+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0114195s)
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.576261+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.577197+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T21:10:14.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.577757+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T21:10:14.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.578101+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T21:10:14.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.578450+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:10:14.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: cluster 2026-03-09T21:10:14.584337+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available
2026-03-09T21:10:14.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.601184+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:10:14.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.602099+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:14.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:14 vm07 bash[20771]: audit 2026-03-09T21:10:14.602565+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:10:15.620 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {
2026-03-09T21:10:15.620 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11,
2026-03-09T21:10:15.620 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T21:10:15.621 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }
2026-03-09T21:10:15.621 INFO:teuthology.orchestra.run.vm07.stdout:mgr epoch 9 is available
2026-03-09T21:10:15.621 INFO:teuthology.orchestra.run.vm07.stdout:Generating a dashboard self-signed certificate...
2026-03-09T21:10:15.947 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-09T21:10:15.947 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial admin user...
2026-03-09T21:10:16.356 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$8C9N21cXx23mV6uvhDrKoOK2OprXHDzIkozjm/QqH4OgKXmHdih0u", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773090616, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-09T21:10:16.356 INFO:teuthology.orchestra.run.vm07.stdout:Fetching dashboard port number...
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: cephadm 2026-03-09T21:10:15.559887+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTING
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: cluster 2026-03-09T21:10:15.574961+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.01246s)
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:15.576299+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:15.580404+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: cephadm 2026-03-09T21:10:15.661495+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on http://192.168.123.107:8765
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: cephadm 2026-03-09T21:10:15.774840+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on https://192.168.123.107:7150
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: cephadm 2026-03-09T21:10:15.774878+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTED
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: cephadm 2026-03-09T21:10:15.775275+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Client ('192.168.123.107', 35926) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:15.853297+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:15.909328+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:15.911628+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:16.167973+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:16 vm07 bash[20771]: audit 2026-03-09T21:10:16.320369+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:16.654 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 8443
2026-03-09T21:10:16.654 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present
2026-03-09T21:10:16.654 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-09T21:10:16.655 INFO:teuthology.orchestra.run.vm07.stdout:Ceph Dashboard is now available at:
2026-03-09T21:10:16.655 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.655 INFO:teuthology.orchestra.run.vm07.stdout: URL: https://vm07.local:8443/
2026-03-09T21:10:16.655 INFO:teuthology.orchestra.run.vm07.stdout: User: admin
2026-03-09T21:10:16.655 INFO:teuthology.orchestra.run.vm07.stdout: Password: kdkx57jhk6
2026-03-09T21:10:16.655 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.656 INFO:teuthology.orchestra.run.vm07.stdout:Saving cluster configuration to /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config directory
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:Or, if you are only running a single cluster on this host:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout: ceph telemetry on
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:For more information see:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:10:16.953 INFO:teuthology.orchestra.run.vm07.stdout:Bootstrap complete.
2026-03-09T21:10:16.977 INFO:tasks.cephadm:Fetching config...
2026-03-09T21:10:16.977 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:10:16.977 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-09T21:10:16.980 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-09T21:10:16.980 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:10:16.980 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-09T21:10:17.024 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-09T21:10:17.025 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:10:17.025 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.a/keyring of=/dev/stdout
2026-03-09T21:10:17.072 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-09T21:10:17.072 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:10:17.072 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-09T21:10:17.116 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-09T21:10:17.116 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMiRdo+3e4EGtGa2okT2caRGcbfUa4+9VemqVJhsxYm ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-09T21:10:17.168 INFO:teuthology.orchestra.run.vm07.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMiRdo+3e4EGtGa2okT2caRGcbfUa4+9VemqVJhsxYm ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:10:17.180 DEBUG:teuthology.orchestra.run.vm10:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMiRdo+3e4EGtGa2okT2caRGcbfUa4+9VemqVJhsxYm ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-09T21:10:17.190 INFO:teuthology.orchestra.run.vm10.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMiRdo+3e4EGtGa2okT2caRGcbfUa4+9VemqVJhsxYm ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:10:17.194 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-09T21:10:17.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:17 vm07 bash[20771]: audit 2026-03-09T21:10:16.614711+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.107:0/707623753' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-09T21:10:17.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:17 vm07 bash[20771]: audit 2026-03-09T21:10:16.914563+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.107:0/2239191560' entity='client.admin'
2026-03-09T21:10:17.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:17 vm07 bash[20771]: cluster 2026-03-09T21:10:17.324025+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s)
2026-03-09T21:10:20.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:20 vm07 bash[20771]: audit 2026-03-09T21:10:19.499973+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:20.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:20 vm07 bash[20771]: audit 2026-03-09T21:10:20.065450+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:21.378 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.a/config
2026-03-09T21:10:21.875 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-09T21:10:21.875 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-09T21:10:22.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:22 vm07 bash[20771]: cluster 2026-03-09T21:10:21.505775+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s)
2026-03-09T21:10:22.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:22 vm07 bash[20771]: audit 2026-03-09T21:10:21.815011+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.107:0/1600342253' entity='client.admin'
2026-03-09T21:10:26.390 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.a/config
2026-03-09T21:10:26.733 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm10
2026-03-09T21:10:26.733 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:10:26.733 DEBUG:teuthology.orchestra.run.vm10:> dd of=/etc/ceph/ceph.conf
2026-03-09T21:10:26.736 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:10:26.736 DEBUG:teuthology.orchestra.run.vm10:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T21:10:26.781 INFO:tasks.cephadm:Adding host vm10 to orchestrator...
2026-03-09T21:10:26.781 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch host add vm10
2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.746316+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.748440+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07
bash[20771]: audit 2026-03-09T21:10:25.748996+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.748996+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.751259+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.751259+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.756122+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.756122+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.758809+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:25.758809+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.648083+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.648083+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.648634+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.648634+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.649464+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.649464+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:27.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.649842+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:27.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:26 vm07 bash[20771]: audit 2026-03-09T21:10:26.649842+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.645367+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.645367+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.650407+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.650407+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.687382+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.687382+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 
bash[20771]: cephadm 2026-03-09T21:10:26.729555+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.729555+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.762855+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: cephadm 2026-03-09T21:10:26.762855+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.798399+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.798399+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.800745+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.800745+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.802820+0000 mon.a (mon.0) 109 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:28.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:27 vm07 bash[20771]: audit 2026-03-09T21:10:26.802820+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:30.396 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.a/config 2026-03-09T21:10:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:31 vm07 bash[20771]: audit 2026-03-09T21:10:30.702295+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:31 vm07 bash[20771]: audit 2026-03-09T21:10:30.702295+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:31 vm07 bash[20771]: cephadm 2026-03-09T21:10:31.234829+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-09T21:10:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:31 vm07 bash[20771]: cephadm 2026-03-09T21:10:31.234829+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-09T21:10:32.537 INFO:teuthology.orchestra.run.vm07.stdout:Added host 'vm10' with addr '192.168.123.110' 2026-03-09T21:10:32.664 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch host ls --format=json 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: audit 
2026-03-09T21:10:32.533862+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: audit 2026-03-09T21:10:32.533862+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: cephadm 2026-03-09T21:10:32.534333+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm10 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: cephadm 2026-03-09T21:10:32.534333+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm10 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: audit 2026-03-09T21:10:32.534634+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: audit 2026-03-09T21:10:32.534634+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: audit 2026-03-09T21:10:32.805244+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:33.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:33 vm07 bash[20771]: audit 2026-03-09T21:10:32.805244+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:35.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:35 vm07 bash[20771]: audit 2026-03-09T21:10:34.071337+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
2026-03-09T21:10:35.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:35 vm07 bash[20771]: audit 2026-03-09T21:10:34.071337+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:35.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:35 vm07 bash[20771]: audit 2026-03-09T21:10:34.617399+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:35.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:35 vm07 bash[20771]: audit 2026-03-09T21:10:34.617399+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:36.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:36 vm07 bash[20771]: cluster 2026-03-09T21:10:34.578744+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:36.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:36 vm07 bash[20771]: cluster 2026-03-09T21:10:34.578744+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:37.269 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.a/config 2026-03-09T21:10:37.540 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:10:37.540 INFO:teuthology.orchestra.run.vm07.stdout:[{"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}, {"addr": "192.168.123.110", "hostname": "vm10", "labels": [], "status": ""}] 2026-03-09T21:10:37.590 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T21:10:37.590 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd crush tunables default 2026-03-09T21:10:38.616 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cluster 2026-03-09T21:10:36.578980+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cluster 2026-03-09T21:10:36.578980+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.333703+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.333703+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.335727+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.335727+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.338446+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.338446+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.340311+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.340311+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.340845+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.340845+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.341505+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.341505+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.342054+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.342054+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.342670+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.342670+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.383096+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.383096+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.412681+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.412681+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.445862+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: cephadm 2026-03-09T21:10:37.445862+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 
2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.475748+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.478214+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:38.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:38 vm07 bash[20771]: audit 2026-03-09T21:10:37.480366+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:39.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:39 vm07 bash[20771]: audit 2026-03-09T21:10:37.539477+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-09T21:10:40.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:40 vm07 bash[20771]: cluster 2026-03-09T21:10:38.579157+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:41.277 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.a/config
2026-03-09T21:10:42.377 INFO:teuthology.orchestra.run.vm07.stderr:adjusted tunables profile to default
2026-03-09T21:10:42.441 INFO:tasks.cephadm:Adding mon.a on vm07
2026-03-09T21:10:42.441 INFO:tasks.cephadm:Adding mon.c on vm07
2026-03-09T21:10:42.441 INFO:tasks.cephadm:Adding mon.b on vm10
2026-03-09T21:10:42.441 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply mon '3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b'
2026-03-09T21:10:42.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:42 vm07 bash[20771]: cluster 2026-03-09T21:10:40.579297+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:42.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:42 vm07 bash[20771]: audit 2026-03-09T21:10:41.602715+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T21:10:43.553 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf
2026-03-09T21:10:43.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:43 vm07 bash[20771]: audit 2026-03-09T21:10:42.376403+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T21:10:43.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:43 vm07 bash[20771]: cluster 2026-03-09T21:10:42.377918+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T21:10:43.916 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled mon update...
2026-03-09T21:10:43.977 DEBUG:teuthology.orchestra.run.vm07:mon.c> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.c.service
2026-03-09T21:10:43.978 DEBUG:teuthology.orchestra.run.vm10:mon.b> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.b.service
2026-03-09T21:10:43.979 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T21:10:43.979 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph mon dump -f json
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: cluster 2026-03-09T21:10:42.579516+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.915748+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.916282+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.917295+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.917727+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.920912+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.921956+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:44.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:44 vm07 bash[20771]: audit 2026-03-09T21:10:43.922332+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:45.132 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:10:45.466 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 0 mon.b@-1(synchronizing).mds e1 new map
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: audit 2026-03-09T21:10:43.912233+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: cephadm 2026-03-09T21:10:43.913356+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b;count:3
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: cephadm 2026-03-09T21:10:43.922825+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm10
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: audit 2026-03-09T21:10:45.305074+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: audit 2026-03-09T21:10:45.307036+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: audit 2026-03-09T21:10:45.309097+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: audit 2026-03-09T21:10:45.309460+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:45.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 bash[20771]: audit 2026-03-09T21:10:45.309899+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:45.945 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 0 mon.b@-1(synchronizing).mds e1 print_map
2026-03-09T21:10:45.945 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10
bash[23387]: e1 2026-03-09T21:10:45.945 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: btime 2026-03-09T21:09:52.806787+0000 2026-03-09T21:10:45.945 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: legacy client fscid: -1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: No filesystems configured 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T21:10:45.946
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.464+0000 7f974ae7f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:52.807752+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:52.807752+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:52.800082+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 
2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:52.800082+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923644+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923644+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923675+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923675+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923679+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923679+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923682+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923682+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 
2026-03-09T21:09:53.923687+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923687+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923689+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923689+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923692+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923692+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923695+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923695+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923918+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923918+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: 
cluster 2026-03-09T21:09:53.923927+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.923927+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T21:10:45.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.924333+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:53.924333+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:53.993001+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.107:0/1076973491' entity='client.admin' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:53.993001+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.107:0/1076973491' entity='client.admin' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:54.568490+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.107:0/2876237503' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:54.568490+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.107:0/2876237503' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:56.811941+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.107:0/742307409' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:56.811941+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.107:0/742307409' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:57.800793+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:57.800793+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:57.806245+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00553332s) 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:57.806245+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00553332s) 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.808939+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.808939+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.809181+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' 
entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.809181+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.809414+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.809414+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.810630+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.810630+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.810741+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.810741+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": 
"y"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:57.819743+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:57.819743+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.834173+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.834173+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.838522+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.838522+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.840593+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.840593+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.842931+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' 
entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.842931+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.849054+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:57.849054+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:58.810845+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01013s) 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:09:58.810845+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01013s) 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:59.536915+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.107:0/1267886844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:59.536915+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 
192.168.123.107:0/1267886844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:59.781874+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.107:0/4264837363' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:09:59.781874+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.107:0/4264837363' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:00.069310+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:00.069310+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:00.331092+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T21:10:45.947 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:00.331092+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:01.072160+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 
192.168.123.107:0/585480375' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:01.072160+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:01.074102+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:01.074102+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:01.385753+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.107:0/558367829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:01.385753+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 
192.168.123.107:0/558367829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.153207+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.153207+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.153406+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.153406+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.157744+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.157744+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.158272+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00494476s) 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.158272+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00494476s) 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.161061+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata", 
"id": "a"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.161061+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.161905+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.161905+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.162780+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.162780+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.162975+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.162975+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:45.948 
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.163195+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:04.169280+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.177769+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.180478+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.194067+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.194397+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.195517+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.196285+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:04.175500+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-09T21:10:45.948 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.657478+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:04.659800+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:05.167121+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.01379s)
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.419972+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.509378+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.514619+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.160489+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.164877+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:05.205334+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTING
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:05.306952+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on http://192.168.123.107:8765
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:05.419250+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on https://192.168.123.107:7150
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:05.419296+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTED
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:05.419828+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Client ('192.168.123.107', 51620) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.505590+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:05.762325+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:06.002761+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:06.003001+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:06.165888+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:06.184000+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:06.519557+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.949 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:06.662102+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:06.769207+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:07.439659+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm07
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:08.835036+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:08.836007+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm07
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:08.836365+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:09.126387+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:09.394945+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:09.639032+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.107:0/767791357' entity='client.admin'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:09.122701+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:09.123590+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:09.391162+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:09.391891+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:10.078303+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.107:0/1146735487' entity='client.admin'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:10.317109+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:10.452588+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:10.600723+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y'
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:11.406718+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:11.413262+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s)
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:11.763406+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.107:0/4131694319' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T21:10:45.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:14.562105+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:14.562677+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:14.573746+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:14.573925+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0114195s)
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.576261+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.577197+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.577757+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.578101+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.578450+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:14.584337+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.601184+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.602099+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:14.602565+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:15.559887+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTING
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:15.574961+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.01246s)
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:15.576299+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:15.580404+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:15.661495+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on http://192.168.123.107:8765
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:15.774840+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on https://192.168.123.107:7150
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:15.774878+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTED
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:15.775275+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Client ('192.168.123.107', 35926) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T21:10:45.951 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:15.853297+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:15.909328+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:15.911628+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:16.167973+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:16.320369+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:16.614711+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.107:0/707623753' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:16.914563+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.107:0/2239191560' entity='client.admin'
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:17.324025+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s)
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:19.499973+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit
2026-03-09T21:10:20.065450+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:20.065450+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:21.505775+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:21.505775+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:21.815011+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.107:0/1600342253' entity='client.admin' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:21.815011+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 
192.168.123.107:0/1600342253' entity='client.admin' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.746316+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.746316+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.748440+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.748440+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.748996+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.748996+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.751259+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.751259+0000 mon.a (mon.0) 100 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.756122+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.756122+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.758809+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:25.758809+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.648083+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.648083+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.648634+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.648634+0000 mon.a (mon.0) 104 : audit 
[DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.649464+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.952 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.649464+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.649842+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.649842+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.645367+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.645367+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.650407+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.650407+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.687382+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.687382+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.729555+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.729555+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.762855+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:26.762855+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 
vm10 bash[23387]: audit 2026-03-09T21:10:26.798399+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.798399+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.800745+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.800745+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.802820+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:26.802820+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:30.702295+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:30.702295+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: 
cephadm 2026-03-09T21:10:31.234829+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:31.234829+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:32.533862+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:32.533862+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:32.534333+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm10 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:32.534333+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm10 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:32.534634+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:32.534634+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:32.805244+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:32.805244+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:34.071337+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:34.071337+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:34.617399+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:34.617399+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:34.578744+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:34.578744+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:36.578980+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:36.578980+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B 
data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.333703+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.333703+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.335727+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.953 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.335727+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.338446+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.338446+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.340311+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.340311+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.340845+0000 mon.a 
(mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.340845+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.341505+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.341505+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.342054+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.342054+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.342670+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 
2026-03-09T21:10:37.342670+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.383096+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.383096+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.412681+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.412681+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.445862+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:37.445862+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.475748+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.475748+0000 mon.a (mon.0) 122 : audit 
[INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.478214+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.478214+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.480366+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.480366+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.539477+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:37.539477+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:38.579157+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:38.579157+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 
pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:40.579297+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:40.579297+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:41.602715+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:41.602715+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:42.376403+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:42.376403+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 
192.168.123.107:0/2472449452' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:42.377918+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:42.377918+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:42.579516+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cluster 2026-03-09T21:10:42.579516+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.915748+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.915748+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.954 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.916282+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.916282+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.917295+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.917295+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.917727+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.917727+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.920912+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.920912+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.921956+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 
vm10 bash[23387]: audit 2026-03-09T21:10:43.921956+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.922332+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.922332+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.912233+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:43.912233+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:43.913356+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b;count:3 2026-03-09T21:10:45.955 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:43.913356+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b;count:3 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:43.922825+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm10 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: cephadm 2026-03-09T21:10:43.922825+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm10 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.305074+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.305074+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.307036+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.307036+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.309097+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.309097+0000 mon.a (mon.0) 
137 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.309460+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.309460+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.309899+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: audit 2026-03-09T21:10:45.309899+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:45.955 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:45 vm10 bash[23387]: debug 2026-03-09T21:10:45.468+0000 7f974ae7f640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T21:10:46.239 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:45 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:10:46.240 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:46 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:10:46.240 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:45 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:10:46.240 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:46 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:10:46.240 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 systemd[1]: Started Ceph mon.c for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.327+0000 7f2d2d034d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.327+0000 7f2d2d034d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.327+0000 7f2d2d034d80 0 pidfile_write: ignore empty --pid-file 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 0 load: jerasure load: lrc 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Git sha 0 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: DB SUMMARY 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: DB Session ID: 8JJMV87LBUIIZYN758I2 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 
2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ; 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 
2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.env: 0x557235625dc0 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.info_log: 0x5572546cd880 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T21:10:46.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T21:10:46.617 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.db_log_dir: 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.wal_dir: 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: 
Options.WAL_size_limit_MB: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.write_buffer_manager: 0x5572546d1900 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T21:10:46.617 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.row_cache: None 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: 
debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.wal_filter: None 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T21:10:46.617 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 
bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T21:10:46.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_open_files: -1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 
7f2d2d034d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Compression algorithms supported: 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kZSTD supported: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.merge_operator: 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T21:10:46.618 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572546cc480) 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cache_index_and_filter_blocks: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: pin_top_level_index_and_filter: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: index_type: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: data_block_index_type: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: index_shortening: 1 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: checksum: 4 2026-03-09T21:10:46.618 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: no_block_cache: 0 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_cache: 0x5572546f3350 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_cache_name: BinnedLRUCache 2026-03-09T21:10:46.618 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_cache_options: 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: capacity : 536870912 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: num_shard_bits : 4 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: strict_capacity_limit : 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: high_pri_pool_ratio: 0.000 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_cache_compressed: (nil) 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: persistent_cache: (nil) 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_size: 4096 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_size_deviation: 10 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_restart_interval: 16 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: index_block_restart_interval: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: metadata_block_size: 4096 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: partition_filters: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:10:46 vm07 bash[28052]: use_delta_encoding: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: filter_policy: bloomfilter 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: whole_key_filtering: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: verify_compression: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: read_amp_bytes_per_bit: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: format_version: 5 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: enable_index_compression: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: block_align: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: max_auto_readahead_size: 262144 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: prepopulate_block_cache: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: initial_auto_readahead_size: 8192 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: num_file_reads_for_auto_readahead: 2 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression: NoCompression 
2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.num_levels: 7 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: 
Options.bottommost_compression_opts.strategy: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 
bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T21:10:46.619 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 
2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: 
Options.compaction_pri: kMinOverlappingRatio 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: 
Options.force_consistency_checks: 1 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T21:10:46.620 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 465fef4d-d654-447f-b3c6-25a119bef54d 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 
2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773090646336029, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.331+0000 7f2d2d034d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.339+0000 7f2d2d034d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773090646342177, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773090646, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "465fef4d-d654-447f-b3c6-25a119bef54d", "db_session_id": "8JJMV87LBUIIZYN758I2", "orig_file_number": 8, "seqno_to_time_mapping": 
"N/A"}} 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.339+0000 7f2d2d034d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773090646342557, "job": 1, "event": "recovery_finished"} 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.339+0000 7f2d2d034d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.339+0000 7f2d2d034d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.339+0000 7f2d2d034d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5572546f4e00 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.339+0000 7f2d2d034d80 4 rocksdb: DB pointer 0x557254800000 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.343+0000 7f2d22dfe640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.343+0000 7f2d22dfe640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: ** DB Stats ** 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes 
per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T21:10:46.620 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: ** Compaction Stats [default] ** 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.01 0.00 1 0.006 0 0 0.0 0.0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.01 0.00 1 0.006 0 0 0.0 0.0 2026-03-09T21:10:46.621 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.01 0.00 1 0.006 0 0 0.0 0.0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: ** Compaction Stats [default] ** 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.01 0.00 1 0.006 0 0 0.0 0.0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:10:46 vm07 bash[28052]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Block cache BinnedLRUCache@0x5572546f3350#7 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.1e-05 secs_since: 0 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.343+0000 7f2d2d034d80 0 mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.343+0000 7f2d2d034d80 0 using public_addrv [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.343+0000 7f2d2d034d80 0 starting mon.c rank -1 at public addrs [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] at bind addrs [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.343+0000 7f2d2d034d80 1 mon.c@-1(???) e0 preinit fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 0 mon.c@-1(synchronizing).mds e1 new map 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 0 mon.c@-1(synchronizing).mds e1 print_map 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: e1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: btime 2026-03-09T21:09:52:806787+0000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: legacy client fscid: -1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: No 
filesystems configured 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 1 mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 0 
mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.363+0000 7f2d25e04640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:52.807752+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:52.807752+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:52.800082+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:52.800082+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923644+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923644+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923675+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923675+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 
1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923679+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923679+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923682+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923682+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923687+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923687+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923689+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923689+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923692+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923692+0000 mon.a 
(mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923695+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:46.621 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923695+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923918+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923918+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923927+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.923927+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.924333+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:53.924333+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:53.993001+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 
192.168.123.107:0/1076973491' entity='client.admin' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:53.993001+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.107:0/1076973491' entity='client.admin' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:54.568490+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.107:0/2876237503' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:54.568490+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.107:0/2876237503' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:56.811941+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.107:0/742307409' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:56.811941+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.107:0/742307409' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:57.800793+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:57.800793+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:57.806245+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00553332s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:57.806245+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00553332s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.808939+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.808939+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.809181+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.809181+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 
cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.809414+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.809414+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.810630+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.810630+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.810741+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.810741+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:57.819743+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:46.622 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:57.819743+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.834173+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.834173+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.838522+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.838522+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.840593+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.840593+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.842931+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 
2026-03-09T21:09:57.842931+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.849054+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:57.849054+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.107:0/2434474056' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:58.810845+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01013s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:09:58.810845+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01013s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:59.536915+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.107:0/1267886844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:59.536915+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 
192.168.123.107:0/1267886844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:59.781874+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.107:0/4264837363' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:09:59.781874+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.107:0/4264837363' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:00.069310+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:00.069310+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:00.331092+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:00.331092+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:01.072160+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 
192.168.123.107:0/585480375' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:01.072160+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.107:0/585480375' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:01.074102+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:01.074102+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:01.385753+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.107:0/558367829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:01.385753+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 
192.168.123.107:0/558367829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.153207+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.153207+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-09T21:10:46.622 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.153406+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.153406+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.157744+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.157744+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.158272+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00494476s) 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.158272+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00494476s) 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.161061+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata", 
"id": "a"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.161061+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.161905+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.161905+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.162780+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.162780+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.162975+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.162975+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:46.623 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.163195+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.163195+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.169280+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:04.169280+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.177769+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.177769+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.180478+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.180478+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.194067+0000 mon.a (mon.0) 48 : audit [INF] 
from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.194067+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.194397+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.194397+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.195517+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.195517+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.196285+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:46.623 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.196285+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:04.175500+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:04.175500+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.657478+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.657478+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.659800+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:04.659800+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:05.167121+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.01379s) 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 
2026-03-09T21:10:05.167121+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.01379s) 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.419972+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.419972+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.509378+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.509378+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.514619+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.514619+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.160489+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T21:10:46.623 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.160489+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T21:10:46.623 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.164877+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.164877+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.205334+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTING 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.205334+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTING 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.306952+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.306952+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.419250+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 
bash[28052]: cephadm 2026-03-09T21:10:05.419250+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.419296+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTED 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.419296+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Bus STARTED 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.419828+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Client ('192.168.123.107', 51620) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:05.419828+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:21:10:05] ENGINE Client ('192.168.123.107', 51620) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.505590+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.505590+0000 mgr.y (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.762325+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:05.762325+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.002761+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.002761+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:06.003001+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:06.003001+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 
2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.165888+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.165888+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.184000+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.184000+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.519557+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.519557+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:06.662102+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:06.662102+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 
2026-03-09T21:10:06.769207+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:06.769207+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:07.439659+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:07.439659+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:08.835036+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:08.835036+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:08.836007+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm07 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:08.836007+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm07 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:08.836365+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 
192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:08.836365+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.126387+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.126387+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.394945+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.394945+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.639032+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.107:0/767791357' entity='client.admin' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.639032+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 
192.168.123.107:0/767791357' entity='client.admin' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.122701+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.122701+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:09.123590+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:09.123590+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.391162+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:09.391162+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:09.391891+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with 
placement count:2 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:09.391891+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.078303+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.107:0/1146735487' entity='client.admin' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.078303+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.107:0/1146735487' entity='client.admin' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.317109+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.317109+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.452588+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.452588+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 
192.168.123.107:0/4237981629' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.600723+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:10.600723+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.107:0/3114733068' entity='mgr.y' 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:11.406718+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:11.406718+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.107:0/4237981629' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:11.413262+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:11.413262+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:11.763406+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 
192.168.123.107:0/4131694319' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T21:10:46.624 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:11.763406+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.107:0/4131694319' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.562105+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.562105+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.562677+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.562677+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.573746+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.573746+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.573925+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0114195s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.573925+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 
0.0114195s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.576261+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.576261+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.577197+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.577197+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.577757+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.577757+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.578101+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:46.625 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.578101+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.578450+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.578450+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.584337+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:14.584337+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.601184+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.601184+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.602099+0000 mon.a 
(mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.602099+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.602565+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:14.602565+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.559887+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTING 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.559887+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTING 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:15.574961+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.01246s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:15.574961+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.01246s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.576299+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.576299+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.580404+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.580404+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.661495+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.661495+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.774840+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.774840+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:10:46.625 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.774878+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTED 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.774878+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Bus STARTED 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.775275+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Client ('192.168.123.107', 35926) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:15.775275+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:21:10:15] ENGINE Client ('192.168.123.107', 35926) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.853297+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.853297+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.909328+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.909328+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.911628+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:15.911628+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.167973+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.167973+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.320369+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.320369+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 
bash[28052]: audit 2026-03-09T21:10:16.614711+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.107:0/707623753' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.614711+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.107:0/707623753' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.914563+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.107:0/2239191560' entity='client.admin' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:16.914563+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.107:0/2239191560' entity='client.admin' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:17.324025+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:17.324025+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:19.499973+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:19.499973+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 
2026-03-09T21:10:20.065450+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:20.065450+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:21.505775+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:21.505775+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:21.815011+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.107:0/1600342253' entity='client.admin' 2026-03-09T21:10:46.625 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:21.815011+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 
192.168.123.107:0/1600342253' entity='client.admin' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.746316+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.746316+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.748440+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.748440+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.748996+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.748996+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.751259+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.751259+0000 mon.a (mon.0) 100 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.756122+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.756122+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.758809+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:25.758809+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.648083+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.648083+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.648634+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.648634+0000 mon.a (mon.0) 104 : audit 
[DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.649464+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.649464+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.649842+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.649842+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.645367+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.645367+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.650407+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.650407+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.687382+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.687382+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.729555+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.729555+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.762855+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:26.762855+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 
vm07 bash[28052]: audit 2026-03-09T21:10:26.798399+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.798399+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.800745+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.800745+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.802820+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:26.802820+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:30.702295+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:30.702295+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: 
cephadm 2026-03-09T21:10:31.234829+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:31.234829+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:32.533862+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:32.533862+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:32.534333+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm10 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:32.534333+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm10 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:32.534634+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:32.534634+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:32.805244+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:32.805244+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:34.071337+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:34.071337+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:34.617399+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:34.617399+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:34.578744+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:34.578744+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:36.578980+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:36.578980+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B 
data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.333703+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.333703+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.626 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.335727+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.335727+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.338446+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.338446+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.340311+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.340311+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.340845+0000 mon.a 
(mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.340845+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.341505+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.341505+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.342054+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.342054+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.342670+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 
2026-03-09T21:10:37.342670+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.383096+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.383096+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.412681+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.412681+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.445862+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:37.445862+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.475748+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.475748+0000 mon.a (mon.0) 122 : audit 
[INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.478214+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.478214+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.480366+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.480366+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.539477+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:37.539477+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:38.579157+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:38.579157+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 
pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:40.579297+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:40.579297+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:41.602715+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:41.602715+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:42.376403+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.107:0/2472449452' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:42.376403+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 
192.168.123.107:0/2472449452' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:42.377918+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:42.377918+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:42.579516+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cluster 2026-03-09T21:10:42.579516+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.915748+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.915748+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.916282+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.916282+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.917295+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.917295+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.917727+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.917727+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.920912+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.920912+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.921956+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 
vm07 bash[28052]: audit 2026-03-09T21:10:43.921956+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.922332+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.922332+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.912233+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:43.912233+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:43.913356+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b;count:3 2026-03-09T21:10:46.627 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:43.913356+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm10:192.168.123.110=b;count:3 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:43.922825+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm10 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: cephadm 2026-03-09T21:10:43.922825+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm10 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.305074+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.305074+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.307036+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.307036+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.309097+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.309097+0000 mon.a (mon.0) 
137 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:46.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.309460+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:46.628 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.309460+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:46.628 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.309899+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.628 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: audit 2026-03-09T21:10:45.309899+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:46.628 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:46 vm07 bash[28052]: debug 2026-03-09T21:10:46.379+0000 7f2d25e04640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T21:10:50.495 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:10:50.495 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":2,"fsid":"22c897f4-1bfc-11f1-adaa-13127443f8b3","modified":"2026-03-09T21:10:45.475940Z","created":"2026-03-09T21:09:51.643158Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:3300","nonce":0},{"type":"v1","addr":"192.168.123.110:6789","nonce":0}]},"addr":"192.168.123.110:6789/0","public_addr":"192.168.123.110:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T21:10:50.495 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 2 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:44.579739+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:44.579739+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cephadm 2026-03-09T21:10:45.310414+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm07 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cephadm 2026-03-09T21:10:45.310414+0000 mgr.y (mgr.14150) 32 : cephadm [INF] 
Deploying daemon mon.c on vm07 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:45.478400+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:45.478400+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:45.478511+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:45.478511+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:45.483352+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:45.483352+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:45.567070+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 
192.168.123.110:0/2807309359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:45.567070+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 192.168.123.110:0/2807309359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:46.388110+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:46.388110+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:46.475363+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:46.475363+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:46.579978+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:46.579978+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:47.388453+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:47.388453+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:47.475451+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:47.475451+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:47.478306+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:47.478306+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:48.388474+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:48.388474+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:48.475716+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:48.475716+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:48.580202+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:48.580202+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:49.388783+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:49.388783+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:49.475642+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:49.475642+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.388643+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.388643+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.475735+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.475735+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.489066+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.489066+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493847+0000 mon.a (mon.0) 156 : cluster 
[DBG] monmap epoch 2 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493847+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493882+0000 mon.a (mon.0) 157 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493882+0000 mon.a (mon.0) 157 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493892+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-09T21:10:45.475940+0000 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493892+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-09T21:10:45.475940+0000 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493897+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493897+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493902+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493902+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 
2026-03-09T21:10:50.493907+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493907+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493911+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493911+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493915+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.493915+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494264+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494264+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494277+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494277+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:50.867 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494384+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494384+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494456+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: cluster 2026-03-09T21:10:50.494456+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.498940+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.498940+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.506102+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.506102+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.512925+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: 
audit 2026-03-09T21:10:50.512925+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.520311+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.520311+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.532418+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:50 vm07 bash[20771]: audit 2026-03-09T21:10:50.532418+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:44.579739+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:44.579739+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cephadm 2026-03-09T21:10:45.310414+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm07 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cephadm 2026-03-09T21:10:45.310414+0000 mgr.y (mgr.14150) 32 : cephadm [INF] 
Deploying daemon mon.c on vm07 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:45.478400+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:45.478400+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:45.478511+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:45.478511+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:45.483352+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:45.483352+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:45.567070+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 
192.168.123.110:0/2807309359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:45.567070+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 192.168.123.110:0/2807309359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:46.388110+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:46.388110+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:46.475363+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:46.475363+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:46.579978+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:46.579978+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:47.388453+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:47.388453+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:47.475451+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:47.475451+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:47.478306+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:47.478306+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:48.388474+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:48.388474+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:48.475716+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:48.475716+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:48.580202+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:48.580202+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:49.388783+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:49.388783+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:49.475642+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:49.475642+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.388643+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.388643+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.475735+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.475735+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.489066+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.489066+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493847+0000 mon.a (mon.0) 156 : cluster 
[DBG] monmap epoch 2 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493847+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493882+0000 mon.a (mon.0) 157 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493882+0000 mon.a (mon.0) 157 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493892+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-09T21:10:45.475940+0000 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493892+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-09T21:10:45.475940+0000 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493897+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:50.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493897+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493902+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493902+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 
2026-03-09T21:10:50.493907+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493907+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493911+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493911+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493915+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.493915+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494264+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494264+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494277+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494277+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:50.944 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494384+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494384+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494456+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: cluster 2026-03-09T21:10:50.494456+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.498940+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.498940+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.506102+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.506102+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.512925+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: 
audit 2026-03-09T21:10:50.512925+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.520311+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.520311+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.532418+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:50.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:50 vm10 bash[23387]: audit 2026-03-09T21:10:50.532418+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:51.576 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T21:10:51.576 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph mon dump -f json 2026-03-09T21:10:51.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:51 vm07 bash[20771]: cluster 2026-03-09T21:10:50.580422+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:51.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:51 vm07 bash[20771]: cluster 2026-03-09T21:10:50.580422+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:51.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:51 vm07 bash[20771]: audit 2026-03-09T21:10:51.388872+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:51.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:51 vm07 bash[20771]: audit 2026-03-09T21:10:51.388872+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:51.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:51 vm07 bash[20771]: audit 2026-03-09T21:10:51.475866+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:51.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:51 vm07 bash[20771]: audit 2026-03-09T21:10:51.475866+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:51 vm10 bash[23387]: 
cluster 2026-03-09T21:10:50.580422+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:51 vm10 bash[23387]: cluster 2026-03-09T21:10:50.580422+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:51 vm10 bash[23387]: audit 2026-03-09T21:10:51.388872+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:51 vm10 bash[23387]: audit 2026-03-09T21:10:51.388872+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:51 vm10 bash[23387]: audit 2026-03-09T21:10:51.475866+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:51 vm10 bash[23387]: audit 2026-03-09T21:10:51.475866+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:55.316 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:52.394202+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 
vm07 bash[20771]: audit 2026-03-09T21:10:52.394202+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:52.394282+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:52.394282+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:52.395382+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:52.395382+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:52.395419+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:52.395419+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:52.396016+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:57.866 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:52.396016+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:52.580601+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:52.580601+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:53.389281+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:53.389281+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:54.389159+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:54.389159+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:54.390423+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T21:10:57.867 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:54.390423+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:54.580816+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:54.580816+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:55.389451+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:55.389451+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:56.389212+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:56.389212+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:57.389087+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: audit 2026-03-09T21:10:57.389087+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.398898+0000 mon.a (mon.0) 185 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.398898+0000 mon.a (mon.0) 185 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402448+0000 mon.a (mon.0) 186 : cluster [DBG] monmap epoch 3 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402448+0000 mon.a (mon.0) 186 : cluster [DBG] monmap epoch 3 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402464+0000 mon.a (mon.0) 187 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402464+0000 mon.a (mon.0) 187 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402475+0000 mon.a (mon.0) 188 : cluster [DBG] last_changed 2026-03-09T21:10:52.389541+0000 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402475+0000 mon.a (mon.0) 188 : cluster [DBG] last_changed 
2026-03-09T21:10:52.389541+0000 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402485+0000 mon.a (mon.0) 189 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402485+0000 mon.a (mon.0) 189 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402495+0000 mon.a (mon.0) 190 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402495+0000 mon.a (mon.0) 190 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402505+0000 mon.a (mon.0) 191 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402505+0000 mon.a (mon.0) 191 : cluster [DBG] election_strategy: 1 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402516+0000 mon.a (mon.0) 192 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402516+0000 mon.a (mon.0) 192 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402525+0000 mon.a (mon.0) 193 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b 2026-03-09T21:10:57.867 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402525+0000 mon.a (mon.0) 193 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402534+0000 mon.a (mon.0) 194 : cluster [DBG] 2: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402534+0000 mon.a (mon.0) 194 : cluster [DBG] 2: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402798+0000 mon.a (mon.0) 195 : cluster [DBG] fsmap 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402798+0000 mon.a (mon.0) 195 : cluster [DBG] fsmap 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402820+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402820+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402966+0000 mon.a (mon.0) 197 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.402966+0000 mon.a (mon.0) 197 : cluster [DBG] mgrmap e13: y(active, since 42s) 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.403094+0000 mon.a (mon.0) 198 
: cluster [INF] overall HEALTH_OK 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:57 vm07 bash[20771]: cluster 2026-03-09T21:10:57.403094+0000 mon.a (mon.0) 198 : cluster [INF] overall HEALTH_OK 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:44.579739+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:44.579739+0000 mgr.y (mgr.14150) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cephadm 2026-03-09T21:10:45.310414+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm07 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cephadm 2026-03-09T21:10:45.310414+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm07 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:45.478400+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:45.478400+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:45.478511+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.867 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:45.478511+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:45.483352+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:45.483352+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-09T21:10:57.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:45.567070+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 192.168.123.110:0/2807309359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:45.567070+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 
192.168.123.110:0/2807309359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:46.388110+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:46.388110+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:46.475363+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:46.475363+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:46.579978+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:46.579978+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:47.388453+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:47.388453+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:47.475451+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:47.475451+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:47.478306+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:47.478306+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:48.388474+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:48.388474+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:48.475716+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:48.475716+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:48.580202+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:48.580202+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:49.388783+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:49.388783+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:49.475642+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:49.475642+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.388643+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.388643+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.475735+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.475735+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.489066+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.489066+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493847+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493847+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2 2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493882+0000 mon.a (mon.0) 157 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493892+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-09T21:10:45.475940+0000
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493897+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493902+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493907+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493911+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.493915+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.494264+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.494277+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.494384+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 35s)
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.494456+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.498940+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.506102+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.512925+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.520311+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:50.532418+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:50.580422+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:57.868 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:51.388872+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:51.475866+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:52.394202+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:52.394282+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:52.395382+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:52.395419+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:52.396016+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:52.580601+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:53.389281+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:54.389159+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:54.390423+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:54.580816+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:55.389451+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:56.389212+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: audit 2026-03-09T21:10:57.389087+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.398898+0000 mon.a (mon.0) 185 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402448+0000 mon.a (mon.0) 186 : cluster [DBG] monmap epoch 3
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402464+0000 mon.a (mon.0) 187 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402475+0000 mon.a (mon.0) 188 : cluster [DBG] last_changed 2026-03-09T21:10:52.389541+0000
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402485+0000 mon.a (mon.0) 189 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402495+0000 mon.a (mon.0) 190 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402505+0000 mon.a (mon.0) 191 : cluster [DBG] election_strategy: 1
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402516+0000 mon.a (mon.0) 192 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402525+0000 mon.a (mon.0) 193 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402534+0000 mon.a (mon.0) 194 : cluster [DBG] 2: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402798+0000 mon.a (mon.0) 195 : cluster [DBG] fsmap
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402820+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.402966+0000 mon.a (mon.0) 197 : cluster [DBG] mgrmap e13: y(active, since 42s)
2026-03-09T21:10:57.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:57 vm07 bash[28052]: cluster 2026-03-09T21:10:57.403094+0000 mon.a (mon.0) 198 : cluster [INF] overall HEALTH_OK
2026-03-09T21:10:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:52.394202+0000 mon.a (mon.0) 176 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:52.394282+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:52.395382+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:52.395419+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:52.396016+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:52.580601+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:53.389281+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:54.389159+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:54.390423+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:54.580816+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:55.389451+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:56.389212+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: audit 2026-03-09T21:10:57.389087+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.398898+0000 mon.a (mon.0) 185 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402448+0000 mon.a (mon.0) 186 : cluster [DBG] monmap epoch 3
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402464+0000 mon.a (mon.0) 187 : cluster [DBG] fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402475+0000 mon.a (mon.0) 188 : cluster [DBG] last_changed 2026-03-09T21:10:52.389541+0000
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402485+0000 mon.a (mon.0) 189 : cluster [DBG] created 2026-03-09T21:09:51.643158+0000
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402495+0000 mon.a (mon.0) 190 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402505+0000 mon.a (mon.0) 191 : cluster [DBG] election_strategy: 1
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402516+0000 mon.a (mon.0) 192 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402525+0000 mon.a (mon.0) 193 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.b
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402534+0000 mon.a (mon.0) 194 : cluster [DBG] 2: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402798+0000 mon.a (mon.0) 195 : cluster [DBG] fsmap
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402820+0000 mon.a (mon.0) 196 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.402966+0000 mon.a (mon.0) 197 : cluster [DBG] mgrmap e13: y(active, since 42s)
2026-03-09T21:10:57.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:57 vm10 bash[23387]: cluster 2026-03-09T21:10:57.403094+0000 mon.a (mon.0) 198 : cluster [INF] overall HEALTH_OK
2026-03-09T21:10:58.635 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:10:58.635 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":3,"fsid":"22c897f4-1bfc-11f1-adaa-13127443f8b3","modified":"2026-03-09T21:10:52.389541Z","created":"2026-03-09T21:09:51.643158Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:3300","nonce":0},{"type":"v1","addr":"192.168.123.110:6789","nonce":0}]},"addr":"192.168.123.110:6789/0","public_addr":"192.168.123.110:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3301","nonce":0},{"type":"v1","addr":"192.168.123.107:6790","nonce":0}]},"addr":"192.168.123.107:6790/0","public_addr":"192.168.123.107:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
2026-03-09T21:10:58.635 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 3
2026-03-09T21:10:58.716 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-09T21:10:58.716 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph config generate-minimal-conf
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cluster 2026-03-09T21:10:56.580975+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.543945+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.622723+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.632057+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.637953+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.659325+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.660283+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.660825+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.661496+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.661943+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.698823+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.700421+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf
2026-03-09T21:10:58.722 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.700421+0000 mgr.y (mgr.14150) 42 : cephadm [INF]
Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.737576+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.737576+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.743836+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.743836+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.747986+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.747986+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.752168+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.752168+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 
bash[20771]: audit 2026-03-09T21:10:57.756230+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.756230+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.769995+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.769995+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.775416+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.775416+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.780425+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.780425+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.785066+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.785066+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.785386+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.785386+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.785860+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.785860+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.787226+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.787226+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.787575+0000 mon.a (mon.0) 217 : audit [DBG] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:57.787575+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.788115+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:57.788115+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.148490+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.148490+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.154040+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.154040+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:58.154720+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:58.154720+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.155125+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.155125+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.155584+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.155584+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.155997+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.155997+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:58.156455+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: cephadm 2026-03-09T21:10:58.156455+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.391205+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.391205+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.506982+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.506982+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.513769+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.513769+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 
2026-03-09T21:10:58.515696+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.515696+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.516506+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.516506+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.516902+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:58 vm07 bash[20771]: audit 2026-03-09T21:10:58.516902+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cluster 2026-03-09T21:10:56.580975+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cluster 
2026-03-09T21:10:56.580975+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.543945+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.543945+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.723 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.622723+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.622723+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.632057+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.632057+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.637953+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.637953+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.659325+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.659325+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.660283+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.660283+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.660825+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.660825+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.661496+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.661496+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating 
vm07:/etc/ceph/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.661943+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.661943+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.698823+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.698823+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.700421+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.700421+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.737576+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.737576+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 
vm07 bash[28052]: audit 2026-03-09T21:10:57.743836+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.743836+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.747986+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.747986+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.752168+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.752168+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.756230+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.756230+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.769995+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.769995+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.775416+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.775416+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.780425+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.780425+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.785066+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.785066+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.785386+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.785386+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.785860+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.785860+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.787226+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.787226+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.787575+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:57.787575+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.788115+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T21:10:58.724 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:57.788115+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.148490+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.148490+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.154040+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.154040+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:58.154720+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:58.154720+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.155125+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.155584+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.155997+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: cephadm 2026-03-09T21:10:58.156455+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm07
2026-03-09T21:10:58.724 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.391205+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:58.725 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.506982+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.725 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.513769+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.725 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.515696+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:58.725 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.516506+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T21:10:58.725 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:58 vm07 bash[28052]: audit 2026-03-09T21:10:58.516902+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:58.857 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cluster 2026-03-09T21:10:56.580975+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.543945+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.622723+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.632057+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.637953+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.659325+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.660283+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.660825+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:57.661496+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:57.661943+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:57.698823+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:57.700421+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.737576+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.743836+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.747986+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.752168+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.756230+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.769995+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.775416+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.780425+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.785066+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:57.785386+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.785860+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.787226+0000 mon.a (mon.0) 216 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:57.787575+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:57.788115+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.148490+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.154040+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:58.154720+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.155125+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:58.858 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.155584+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.155997+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: cephadm 2026-03-09T21:10:58.156455+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring daemon mon.a on vm07
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.391205+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.506982+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.513769+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.515696+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.516506+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T21:10:58.859 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:58 vm10 bash[23387]: audit 2026-03-09T21:10:58.516902+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: cephadm 2026-03-09T21:10:58.515493+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: cephadm 2026-03-09T21:10:58.517398+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm10
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: cluster 2026-03-09T21:10:58.581204+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.634354+0000 mon.a (mon.0) 229 : audit [DBG] from='client.? 192.168.123.110:0/4104896788' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.906597+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.913427+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.915498+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.916620+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.917252+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:10:59 vm07 bash[20771]: audit 2026-03-09T21:10:58.921788+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:10:59 vm07 bash[21040]: debug 2026-03-09T21:10:59.387+0000 7fc57dd7d640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-09T21:10:59.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: cephadm 2026-03-09T21:10:58.515493+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: cephadm 2026-03-09T21:10:58.517398+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm10
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: cluster 2026-03-09T21:10:58.581204+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.634354+0000 mon.a (mon.0) 229 : audit [DBG] from='client.? 192.168.123.110:0/4104896788' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.906597+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.913427+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.915498+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.916620+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.917252+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:10:59.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:10:59 vm07 bash[28052]: audit 2026-03-09T21:10:58.921788+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: cephadm 2026-03-09T21:10:58.515493+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: cephadm 2026-03-09T21:10:58.517398+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm10 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: cephadm 2026-03-09T21:10:58.517398+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring daemon mon.b on vm10 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: cluster 2026-03-09T21:10:58.581204+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: cluster 2026-03-09T21:10:58.581204+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.634354+0000 mon.a (mon.0) 229 : audit [DBG] from='client.? 192.168.123.110:0/4104896788' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.634354+0000 mon.a (mon.0) 229 : audit [DBG] from='client.? 
192.168.123.110:0/4104896788' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.906597+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.906597+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.913427+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.913427+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.915498+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.915498+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.916620+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 
2026-03-09T21:10:58.916620+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.917252+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.917252+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:10:59.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.921788+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:10:59.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:10:59 vm10 bash[23387]: audit 2026-03-09T21:10:58.921788+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:01.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:01 vm07 bash[20771]: cluster 2026-03-09T21:11:00.581396+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:01.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:01 vm07 bash[20771]: cluster 2026-03-09T21:11:00.581396+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:01.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:01 vm07 bash[28052]: cluster 2026-03-09T21:11:00.581396+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:01.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:01 vm07 
bash[28052]: cluster 2026-03-09T21:11:00.581396+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:01 vm10 bash[23387]: cluster 2026-03-09T21:11:00.581396+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:01 vm10 bash[23387]: cluster 2026-03-09T21:11:00.581396+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.330 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:11:03.584 INFO:teuthology.orchestra.run.vm07.stdout:# minimal ceph.conf for 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:11:03.584 INFO:teuthology.orchestra.run.vm07.stdout:[global] 2026-03-09T21:11:03.584 INFO:teuthology.orchestra.run.vm07.stdout: fsid = 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:11:03.584 INFO:teuthology.orchestra.run.vm07.stdout: mon_host = [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] 2026-03-09T21:11:03.632 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-09T21:11:03.633 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T21:11:03.633 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T21:11:03.680 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T21:11:03.680 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:11:03.728 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T21:11:03.728 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T21:11:03.735 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T21:11:03.735 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:11:03.788 INFO:tasks.cephadm:Adding mgr.y on vm07 2026-03-09T21:11:03.788 INFO:tasks.cephadm:Adding mgr.x on vm10 2026-03-09T21:11:03.789 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply mgr '2;vm07=y;vm10=x' 2026-03-09T21:11:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:03 vm07 bash[20771]: cluster 2026-03-09T21:11:02.581580+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:03 vm07 bash[20771]: cluster 2026-03-09T21:11:02.581580+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:03 vm07 bash[20771]: audit 2026-03-09T21:11:03.583159+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 
192.168.123.107:0/2010006752' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:03 vm07 bash[20771]: audit 2026-03-09T21:11:03.583159+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.107:0/2010006752' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:03.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:03 vm07 bash[28052]: cluster 2026-03-09T21:11:02.581580+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:03 vm07 bash[28052]: cluster 2026-03-09T21:11:02.581580+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:03 vm07 bash[28052]: audit 2026-03-09T21:11:03.583159+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.107:0/2010006752' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:03.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:03 vm07 bash[28052]: audit 2026-03-09T21:11:03.583159+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 
192.168.123.107:0/2010006752' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:03.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:03 vm10 bash[23387]: cluster 2026-03-09T21:11:02.581580+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:03 vm10 bash[23387]: cluster 2026-03-09T21:11:02.581580+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:03.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:03 vm10 bash[23387]: audit 2026-03-09T21:11:03.583159+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.107:0/2010006752' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:03.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:03 vm10 bash[23387]: audit 2026-03-09T21:11:03.583159+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 
192.168.123.107:0/2010006752' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:05 vm10 bash[23387]: cluster 2026-03-09T21:11:04.581758+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:05 vm10 bash[23387]: cluster 2026-03-09T21:11:04.581758+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:06.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:05 vm07 bash[20771]: cluster 2026-03-09T21:11:04.581758+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:06.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:05 vm07 bash[20771]: cluster 2026-03-09T21:11:04.581758+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:06.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:05 vm07 bash[28052]: cluster 2026-03-09T21:11:04.581758+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:06.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:05 vm07 bash[28052]: cluster 2026-03-09T21:11:04.581758+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.433 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:11:07.698 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled mgr update... 
2026-03-09T21:11:07.708 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:07 vm10 bash[23387]: cluster 2026-03-09T21:11:06.581923+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.708 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:07 vm10 bash[23387]: cluster 2026-03-09T21:11:06.581923+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.768 DEBUG:teuthology.orchestra.run.vm10:mgr.x> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.x.service 2026-03-09T21:11:07.769 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T21:11:07.769 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T21:11:07.769 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T21:11:07.772 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T21:11:07.772 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d? 2026-03-09T21:11:07.816 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda 2026-03-09T21:11:07.816 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb 2026-03-09T21:11:07.816 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc 2026-03-09T21:11:07.816 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd 2026-03-09T21:11:07.816 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde 2026-03-09T21:11:07.816 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T21:11:07.816 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T21:11:07.816 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T21:11:07.861 
INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 21:03:47.750401474 +0000 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 21:03:46.662401474 +0000 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 21:03:46.662401474 +0000 2026-03-09T21:11:07.861 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T21:11:07.861 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T21:11:07.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:07 vm07 bash[28052]: cluster 2026-03-09T21:11:06.581923+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:07 vm07 bash[28052]: cluster 2026-03-09T21:11:06.581923+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:07 vm07 bash[20771]: cluster 2026-03-09T21:11:06.581923+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:07 vm07 bash[20771]: cluster 2026-03-09T21:11:06.581923+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:07.908 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T21:11:07.908 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T21:11:07.908 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000155852 s, 3.3 MB/s 2026-03-09T21:11:07.909 DEBUG:teuthology.orchestra.run.vm07:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T21:11:07.953 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc 2026-03-09T21:11:07.964 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:07 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 21:03:47.758401474 +0000 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 21:03:46.638401474 +0000 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 21:03:46.638401474 +0000 2026-03-09T21:11:07.996 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T21:11:07.996 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T21:11:08.044 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T21:11:08.044 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T21:11:08.044 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000193582 s, 2.6 MB/s 2026-03-09T21:11:08.045 DEBUG:teuthology.orchestra.run.vm07:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T21:11:08.089 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 21:03:47.750401474 +0000 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 21:03:46.694401474 +0000 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 21:03:46.694401474 +0000 2026-03-09T21:11:08.136 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T21:11:08.136 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T21:11:08.183 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T21:11:08.183 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T21:11:08.183 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000157405 s, 3.3 MB/s 2026-03-09T21:11:08.184 DEBUG:teuthology.orchestra.run.vm07:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T21:11:08.229 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde 2026-03-09T21:11:08.272 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 21:03:47.758401474 +0000 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 21:03:46.694401474 +0000 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 21:03:46.694401474 +0000 2026-03-09T21:11:08.273 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T21:11:08.273 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T21:11:08.321 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T21:11:08.321 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T21:11:08.321 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000141455 s, 3.6 MB/s 2026-03-09T21:11:08.321 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T21:11:08.371 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T21:11:08.371 DEBUG:teuthology.orchestra.run.vm10:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T21:11:08.373 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T21:11:08.373 DEBUG:teuthology.orchestra.run.vm10:> ls /dev/[sv]d? 
2026-03-09T21:11:08.419 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vda 2026-03-09T21:11:08.420 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdb 2026-03-09T21:11:08.420 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdc 2026-03-09T21:11:08.420 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdd 2026-03-09T21:11:08.420 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vde 2026-03-09T21:11:08.420 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T21:11:08.420 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T21:11:08.420 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdb 2026-03-09T21:11:08.463 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdb 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 21:04:19.383632573 +0000 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 21:04:18.055632573 +0000 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 21:04:18.055632573 +0000 2026-03-09T21:11:08.464 INFO:teuthology.orchestra.run.vm10.stdout: Birth: - 2026-03-09T21:11:08.464 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T21:11:08.491 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:08.492 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:08 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:08.499 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in 2026-03-09T21:11:08.499 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out 2026-03-09T21:11:08.499 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000108703 s, 4.7 MB/s 2026-03-09T21:11:08.499 DEBUG:teuthology.orchestra.run.vm10:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T21:11:08.550 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdc 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdc 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 21:04:19.391632573 +0000 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 21:04:18.035632573 +0000 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 21:04:18.035632573 +0000 2026-03-09T21:11:08.606 INFO:teuthology.orchestra.run.vm10.stdout: Birth: - 2026-03-09T21:11:08.607 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T21:11:08.659 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in 2026-03-09T21:11:08.659 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out 2026-03-09T21:11:08.659 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.00112219 s, 456 kB/s 2026-03-09T21:11:08.660 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T21:11:08.722 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdd 2026-03-09T21:11:08.745 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:08 vm10 systemd[1]: Started Ceph mgr.x for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:11:08.756 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdd 2026-03-09T21:11:08.760 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:08.761 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T21:11:08.761 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:08.761 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 21:04:19.383632573 +0000 2026-03-09T21:11:08.761 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 21:04:18.011632573 +0000 2026-03-09T21:11:08.761 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 21:04:18.011632573 +0000 2026-03-09T21:11:08.761 INFO:teuthology.orchestra.run.vm10.stdout: Birth: - 2026-03-09T21:11:08.761 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T21:11:08.780 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:08 vm10 bash[24097]: debug 2026-03-09T21:11:08.740+0000 7fcf88aa2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T21:11:08.817 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in 2026-03-09T21:11:08.818 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out 2026-03-09T21:11:08.818 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.00732915 s, 69.9 kB/s 2026-03-09T21:11:08.818 DEBUG:teuthology.orchestra.run.vm10:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T21:11:08.869 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vde 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vde 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 21:04:19.391632573 +0000 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 21:04:18.031632573 +0000 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 21:04:18.031632573 +0000 2026-03-09T21:11:08.916 INFO:teuthology.orchestra.run.vm10.stdout: Birth: - 2026-03-09T21:11:08.916 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T21:11:08.963 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in 2026-03-09T21:11:08.963 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out 2026-03-09T21:11:08.963 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000155632 s, 3.3 MB/s 2026-03-09T21:11:08.964 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T21:11:09.014 INFO:tasks.cephadm:Deploying osd.0 on vm07 with /dev/vde... 
2026-03-09T21:11:09.014 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vde
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.690615+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm07=y;vm10=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.690615+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm07=y;vm10=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: cephadm 2026-03-09T21:11:07.691538+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm07=y;vm10=x;count:2
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: cephadm 2026-03-09T21:11:07.691538+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm07=y;vm10=x;count:2
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.696821+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.696821+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.697418+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.697418+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.698482+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.698482+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.698840+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.698840+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.704301+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.704301+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.705561+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.705561+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.707922+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.707922+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.711688+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.711688+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.712306+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:07.712306+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: cephadm 2026-03-09T21:11:07.712804+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm10
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: cephadm 2026-03-09T21:11:07.712804+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm10
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.534063+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.534063+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.541413+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.541413+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.547148+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.547148+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.554227+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.554227+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.565896+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:08 vm07 bash[28052]: audit 2026-03-09T21:11:08.565896+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.690615+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm07=y;vm10=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.690615+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm07=y;vm10=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: cephadm 2026-03-09T21:11:07.691538+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm07=y;vm10=x;count:2
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: cephadm 2026-03-09T21:11:07.691538+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm07=y;vm10=x;count:2
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.696821+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.696821+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.697418+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.697418+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.698482+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.698482+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.698840+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.698840+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.704301+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.704301+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.705561+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.705561+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.707922+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.707922+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.711688+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.711688+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.712306+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:07.712306+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: cephadm 2026-03-09T21:11:07.712804+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm10
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: cephadm 2026-03-09T21:11:07.712804+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm10
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.534063+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.534063+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.541413+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.541413+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.547148+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.547148+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.554227+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.554227+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.565896+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:08 vm07 bash[20771]: audit 2026-03-09T21:11:08.565896+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.690615+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm07=y;vm10=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.690615+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm07=y;vm10=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: cephadm 2026-03-09T21:11:07.691538+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm07=y;vm10=x;count:2
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: cephadm 2026-03-09T21:11:07.691538+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm07=y;vm10=x;count:2
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.696821+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.696821+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.697418+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.697418+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.698482+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.698482+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.698840+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.698840+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.704301+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.704301+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.705561+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.705561+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.707922+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.707922+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.711688+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.711688+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.712306+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:07.712306+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: cephadm 2026-03-09T21:11:07.712804+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm10
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: cephadm 2026-03-09T21:11:07.712804+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm10
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.534063+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.534063+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.541413+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.541413+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.547148+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.547148+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.554227+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.554227+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.565896+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:08 vm10 bash[23387]: audit 2026-03-09T21:11:08.565896+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:08 vm10 bash[24097]: debug 2026-03-09T21:11:08.776+0000 7fcf88aa2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T21:11:09.193 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:08 vm10 bash[24097]: debug 2026-03-09T21:11:08.904+0000 7fcf88aa2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T21:11:09.658 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: debug 2026-03-09T21:11:09.204+0000 7fcf88aa2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:09 vm10 bash[23387]: cluster 2026-03-09T21:11:08.582121+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:09 vm10 bash[23387]: cluster 2026-03-09T21:11:08.582121+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: debug 2026-03-09T21:11:09.656+0000 7fcf88aa2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: debug 2026-03-09T21:11:09.740+0000 7fcf88aa2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: from numpy import show_config as show_numpy_config
2026-03-09T21:11:09.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:09 vm10 bash[24097]: debug 2026-03-09T21:11:09.868+0000 7fcf88aa2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T21:11:10.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:09 vm07 bash[20771]: cluster 2026-03-09T21:11:08.582121+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:10.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:09 vm07 bash[20771]: cluster 2026-03-09T21:11:08.582121+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:10.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:09 vm07 bash[28052]: cluster 2026-03-09T21:11:08.582121+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:10.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:09 vm07 bash[28052]: cluster 2026-03-09T21:11:08.582121+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:10.442 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.000+0000 7fcf88aa2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T21:11:10.442 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.036+0000 7fcf88aa2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T21:11:10.442 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.068+0000 7fcf88aa2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T21:11:10.442 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.108+0000 7fcf88aa2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T21:11:10.442 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.156+0000 7fcf88aa2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T21:11:10.854 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.584+0000 7fcf88aa2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T21:11:10.854 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.620+0000 7fcf88aa2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T21:11:10.854 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.660+0000 7fcf88aa2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T21:11:10.854 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.808+0000 7fcf88aa2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T21:11:11.161 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.852+0000 7fcf88aa2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T21:11:11.161 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:10 vm10 bash[24097]: debug 2026-03-09T21:11:10.892+0000 7fcf88aa2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T21:11:11.161 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.000+0000 7fcf88aa2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:11:11.417 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.156+0000 7fcf88aa2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T21:11:11.417 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.324+0000 7fcf88aa2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T21:11:11.417 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.364+0000 7fcf88aa2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T21:11:11.417 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.412+0000 7fcf88aa2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T21:11:11.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.548+0000 7fcf88aa2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: cluster 2026-03-09T21:11:10.582322+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: cluster 2026-03-09T21:11:10.582322+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: cluster 2026-03-09T21:11:11.783511+0000 mon.a (mon.0) 250 : cluster [DBG] Standby manager daemon x started
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: cluster 2026-03-09T21:11:11.783511+0000 mon.a (mon.0) 250 : cluster [DBG] Standby manager daemon x started
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.785284+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.785284+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.785535+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.785535+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.786125+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.786125+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.786286+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T21:11:12.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:11 vm10 bash[23387]: audit 2026-03-09T21:11:11.786286+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T21:11:12.193 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:11:11 vm10 bash[24097]: debug 2026-03-09T21:11:11.776+0000 7fcf88aa2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: cluster 2026-03-09T21:11:10.582322+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: cluster 2026-03-09T21:11:10.582322+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: cluster 2026-03-09T21:11:11.783511+0000 mon.a (mon.0) 250 : cluster [DBG] Standby manager daemon x started
2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: cluster 2026-03-09T21:11:11.783511+0000 mon.a (mon.0) 250 : cluster [DBG] Standby manager daemon x started
2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.785284+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.785284+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.?
192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.785535+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.785535+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.786125+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.786125+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.786286+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:11 vm07 bash[20771]: audit 2026-03-09T21:11:11.786286+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: cluster 2026-03-09T21:11:10.582322+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: cluster 2026-03-09T21:11:10.582322+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: cluster 2026-03-09T21:11:11.783511+0000 mon.a (mon.0) 250 : cluster [DBG] Standby manager daemon x started 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: cluster 2026-03-09T21:11:11.783511+0000 mon.a (mon.0) 250 : cluster [DBG] Standby manager daemon x started 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.785284+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.785284+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.785535+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 
192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.785535+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.786125+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.786125+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.786286+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:11:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:11 vm07 bash[28052]: audit 2026-03-09T21:11:11.786286+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.110:0/3977206017' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:11:12.632 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:12 vm07 bash[28052]: cluster 2026-03-09T21:11:11.875814+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:12 vm07 bash[28052]: cluster 2026-03-09T21:11:11.875814+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:12 vm07 bash[28052]: audit 2026-03-09T21:11:11.876913+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:12 vm07 bash[28052]: audit 2026-03-09T21:11:11.876913+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:12 vm07 bash[28052]: audit 2026-03-09T21:11:12.629377+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:12 vm07 bash[28052]: audit 2026-03-09T21:11:12.629377+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:12 vm07 bash[20771]: cluster 2026-03-09T21:11:11.875814+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-09T21:11:13.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:12 vm07 bash[20771]: cluster 2026-03-09T21:11:11.875814+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:12 vm07 bash[20771]: audit 2026-03-09T21:11:11.876913+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:12 vm07 bash[20771]: audit 2026-03-09T21:11:11.876913+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:12 vm07 bash[20771]: audit 2026-03-09T21:11:12.629377+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:13.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:12 vm07 bash[20771]: audit 2026-03-09T21:11:12.629377+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:12 vm10 bash[23387]: cluster 2026-03-09T21:11:11.875814+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-09T21:11:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:12 vm10 bash[23387]: cluster 2026-03-09T21:11:11.875814+0000 mon.a (mon.0) 251 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-09T21:11:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:12 vm10 bash[23387]: audit 2026-03-09T21:11:11.876913+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:11:13.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:12 vm10 bash[23387]: audit 2026-03-09T21:11:11.876913+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:11:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:12 vm10 bash[23387]: audit 2026-03-09T21:11:12.629377+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:12 vm10 bash[23387]: audit 2026-03-09T21:11:12.629377+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:13.453 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:11:13.468 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm07:/dev/vde 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: cluster 2026-03-09T21:11:12.582539+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: cluster 2026-03-09T21:11:12.582539+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.542012+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.542012+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.563613+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.563613+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.565832+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.565832+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.570170+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.570170+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.749157+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 
2026-03-09T21:11:13.749157+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.760477+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.760477+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.761056+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.761056+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.761474+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:13 vm10 bash[23387]: audit 2026-03-09T21:11:13.761474+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: cluster 2026-03-09T21:11:12.582539+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: cluster 2026-03-09T21:11:12.582539+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.542012+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.542012+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.563613+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.563613+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.565832+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.565832+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.570170+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.570170+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.749157+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.749157+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.760477+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.760477+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.761056+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T21:11:14.366 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.761056+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.761474+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:13 vm07 bash[20771]: audit 2026-03-09T21:11:13.761474+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: cluster 2026-03-09T21:11:12.582539+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: cluster 2026-03-09T21:11:12.582539+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.542012+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.542012+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.563613+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.563613+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.565832+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.565832+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.570170+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.570170+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.749157+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.749157+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.760477+0000 mon.a (mon.0) 259 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T21:11:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.760477+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T21:11:14.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.761056+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T21:11:14.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.761056+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T21:11:14.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.761474+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:14.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:13 vm07 bash[28052]: audit 2026-03-09T21:11:13.761474+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: cephadm 2026-03-09T21:11:13.760126+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: cephadm 2026-03-09T21:11:13.760126+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: cephadm 2026-03-09T21:11:13.762050+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm07
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: audit 2026-03-09T21:11:14.308314+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: audit 2026-03-09T21:11:14.313226+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: audit 2026-03-09T21:11:14.314395+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: audit 2026-03-09T21:11:14.315471+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: audit 2026-03-09T21:11:14.315914+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:15 vm07 bash[20771]: audit 2026-03-09T21:11:14.320034+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: cephadm 2026-03-09T21:11:13.760126+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: cephadm 2026-03-09T21:11:13.762050+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm07
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: audit 2026-03-09T21:11:14.308314+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: audit 2026-03-09T21:11:14.313226+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: audit 2026-03-09T21:11:14.314395+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:15.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: audit 2026-03-09T21:11:14.315471+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:15.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: audit 2026-03-09T21:11:14.315914+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:15.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:15 vm07 bash[28052]: audit 2026-03-09T21:11:14.320034+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: cephadm 2026-03-09T21:11:13.760126+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: cephadm 2026-03-09T21:11:13.762050+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm07
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: audit 2026-03-09T21:11:14.308314+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: audit 2026-03-09T21:11:14.313226+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: audit 2026-03-09T21:11:14.314395+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: audit 2026-03-09T21:11:14.315471+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: audit 2026-03-09T21:11:14.315914+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:15 vm10 bash[23387]: audit 2026-03-09T21:11:14.320034+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:16 vm10 bash[23387]: cluster 2026-03-09T21:11:14.582751+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:16.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:16 vm07 bash[20771]: cluster 2026-03-09T21:11:14.582751+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:16.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:16 vm07 bash[28052]: cluster 2026-03-09T21:11:14.582751+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:18.103 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config
2026-03-09T21:11:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:18 vm10 bash[23387]: cluster 2026-03-09T21:11:16.583054+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:18 vm10 bash[23387]: audit 2026-03-09T21:11:18.365019+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:11:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:18 vm10 bash[23387]: audit 2026-03-09T21:11:18.366607+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:11:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:18 vm10 bash[23387]: audit 2026-03-09T21:11:18.366962+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:18 vm07 bash[20771]: cluster 2026-03-09T21:11:16.583054+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:18 vm07 bash[20771]: audit 2026-03-09T21:11:18.365019+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:18 vm07 bash[20771]: audit 2026-03-09T21:11:18.366607+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:18 vm07 bash[20771]: audit 2026-03-09T21:11:18.366962+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:18 vm07 bash[28052]: cluster 2026-03-09T21:11:16.583054+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:18 vm07 bash[28052]: audit 2026-03-09T21:11:18.365019+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:18 vm07 bash[28052]: audit 2026-03-09T21:11:18.366607+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:11:18.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:18 vm07 bash[28052]: audit 2026-03-09T21:11:18.366962+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:19 vm10 bash[23387]: audit 2026-03-09T21:11:18.363607+0000 mgr.y (mgr.14150) 64 : audit [DBG] from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:19 vm07 bash[20771]: audit 2026-03-09T21:11:18.363607+0000 mgr.y (mgr.14150) 64 : audit [DBG] from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:19.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:19 vm07 bash[28052]: audit 2026-03-09T21:11:18.363607+0000 mgr.y (mgr.14150) 64 : audit [DBG] from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:11:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:20 vm10 bash[23387]: cluster 2026-03-09T21:11:18.583286+0000 mgr.y (mgr.14150) 65 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:20 vm07 bash[20771]: cluster 2026-03-09T21:11:18.583286+0000 mgr.y (mgr.14150) 65 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:20.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:20 vm07 bash[28052]: cluster 2026-03-09T21:11:18.583286+0000 mgr.y (mgr.14150) 65 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:22 vm10 bash[23387]: cluster 2026-03-09T21:11:20.583450+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:22 vm07 bash[20771]: cluster 2026-03-09T21:11:20.583450+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:22.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:22 vm07 bash[28052]: cluster 2026-03-09T21:11:20.583450+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:24 vm10 bash[23387]: cluster 2026-03-09T21:11:22.583598+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:24 vm10 bash[23387]: audit 2026-03-09T21:11:23.718461+0000 mon.a (mon.0) 271 : audit [INF] from='client.? 192.168.123.107:0/920136217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ef293c2-89b5-4f27-a447-e0750ac5c165"}]: dispatch
2026-03-09T21:11:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:24 vm10 bash[23387]: audit 2026-03-09T21:11:23.721207+0000 mon.a (mon.0) 272 : audit [INF] from='client.? 192.168.123.107:0/920136217' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ef293c2-89b5-4f27-a447-e0750ac5c165"}]': finished
2026-03-09T21:11:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:24 vm10 bash[23387]: cluster 2026-03-09T21:11:23.724304+0000 mon.a (mon.0) 273 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T21:11:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:24 vm10 bash[23387]: audit 2026-03-09T21:11:23.724694+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:24 vm07 bash[20771]: cluster 2026-03-09T21:11:22.583598+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:24 vm07 bash[20771]: audit 2026-03-09T21:11:23.718461+0000 mon.a (mon.0) 271 : audit [INF] from='client.? 192.168.123.107:0/920136217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ef293c2-89b5-4f27-a447-e0750ac5c165"}]: dispatch
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:24 vm07 bash[20771]: audit 2026-03-09T21:11:23.721207+0000 mon.a (mon.0) 272 : audit [INF] from='client.? 192.168.123.107:0/920136217' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ef293c2-89b5-4f27-a447-e0750ac5c165"}]': finished
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:24 vm07 bash[20771]: cluster 2026-03-09T21:11:23.724304+0000 mon.a (mon.0) 273 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:24 vm07 bash[20771]: audit 2026-03-09T21:11:23.724694+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:24 vm07 bash[28052]: cluster 2026-03-09T21:11:22.583598+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:24 vm07 bash[28052]: audit 2026-03-09T21:11:23.718461+0000 mon.a (mon.0) 271 : audit [INF] from='client.? 192.168.123.107:0/920136217' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ef293c2-89b5-4f27-a447-e0750ac5c165"}]: dispatch
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:24 vm07 bash[28052]: audit 2026-03-09T21:11:23.721207+0000 mon.a (mon.0) 272 : audit [INF] from='client.? 192.168.123.107:0/920136217' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ef293c2-89b5-4f27-a447-e0750ac5c165"}]': finished
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:24 vm07 bash[28052]: cluster 2026-03-09T21:11:23.724304+0000 mon.a (mon.0) 273 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T21:11:24.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:24 vm07 bash[28052]: audit 2026-03-09T21:11:23.724694+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:25.691 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:25 vm10 bash[23387]: audit 2026-03-09T21:11:24.414729+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.107:0/2870137541' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:11:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:25 vm07 bash[20771]: audit 2026-03-09T21:11:24.414729+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.107:0/2870137541' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:11:25.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:25 vm07 bash[28052]: audit 2026-03-09T21:11:24.414729+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.107:0/2870137541' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:11:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:26 vm10 bash[23387]: cluster 2026-03-09T21:11:24.583765+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:26 vm07 bash[20771]: cluster 2026-03-09T21:11:24.583765+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:26 vm07 bash[28052]: cluster 2026-03-09T21:11:24.583765+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:28.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:28 vm07 bash[20771]: cluster 2026-03-09T21:11:26.583956+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:28.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:28 vm07 bash[28052]: cluster 2026-03-09T21:11:26.583956+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:28 vm10 bash[23387]: cluster 2026-03-09T21:11:26.583956+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:29.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:29 vm07 bash[20771]: cluster 2026-03-09T21:11:28.584139+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:29 vm07 bash[28052]: cluster 2026-03-09T21:11:28.584139+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:29.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:29 vm10 bash[23387]: cluster 2026-03-09T21:11:28.584139+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:32.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:31 vm07 bash[20771]: cluster 2026-03-09T21:11:30.584349+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:32.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:31 vm07 bash[28052]: cluster 2026-03-09T21:11:30.584349+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:31 vm10 bash[23387]: cluster 2026-03-09T21:11:30.584349+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:32 vm07 bash[20771]: audit 2026-03-09T21:11:32.578791+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T21:11:32.828
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:32 vm07 bash[20771]: audit 2026-03-09T21:11:32.578791+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:32 vm07 bash[20771]: audit 2026-03-09T21:11:32.579245+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:32 vm07 bash[20771]: audit 2026-03-09T21:11:32.579245+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:32 vm07 bash[28052]: audit 2026-03-09T21:11:32.578791+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:32 vm07 bash[28052]: audit 2026-03-09T21:11:32.578791+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:32 vm07 bash[28052]: audit 2026-03-09T21:11:32.579245+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:32.828 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:32 vm07 bash[28052]: audit 2026-03-09T21:11:32.579245+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T21:11:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:32 vm10 bash[23387]: audit 2026-03-09T21:11:32.578791+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T21:11:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:32 vm10 bash[23387]: audit 2026-03-09T21:11:32.578791+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T21:11:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:32 vm10 bash[23387]: audit 2026-03-09T21:11:32.579245+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:32 vm10 bash[23387]: audit 2026-03-09T21:11:32.579245+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:33.762 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:11:33 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:33.762 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:11:33 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:33.762 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:33 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:33 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:33 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:33 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:33 vm07 bash[20771]: cephadm 2026-03-09T21:11:32.579597+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:33 vm07 bash[20771]: cephadm 2026-03-09T21:11:32.579597+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:33 vm07 bash[20771]: cluster 2026-03-09T21:11:32.584508+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:33.763 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:33 vm07 bash[20771]: cluster 2026-03-09T21:11:32.584508+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:34.039 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:33 vm07 bash[28052]: cephadm 2026-03-09T21:11:32.579597+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T21:11:34.039 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:33 vm07 bash[28052]: cephadm 2026-03-09T21:11:32.579597+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T21:11:34.039 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:33 vm07 bash[28052]: cluster 2026-03-09T21:11:32.584508+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:34.039 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:33 vm07 bash[28052]: cluster 2026-03-09T21:11:32.584508+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:33 vm10 bash[23387]: cephadm 2026-03-09T21:11:32.579597+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T21:11:34.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:33 vm10 bash[23387]: cephadm 2026-03-09T21:11:32.579597+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T21:11:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:33 vm10 bash[23387]: cluster 2026-03-09T21:11:32.584508+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:33 vm10 bash[23387]: cluster 2026-03-09T21:11:32.584508+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:34 vm07 bash[20771]: audit 2026-03-09T21:11:33.795950+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:34 vm07 bash[20771]: audit 2026-03-09T21:11:33.795950+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:34 vm07 bash[20771]: audit 2026-03-09T21:11:33.800735+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:34 vm07 bash[20771]: audit 2026-03-09T21:11:33.800735+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:34 vm07 bash[20771]: audit 2026-03-09T21:11:33.806236+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:34 vm07 bash[20771]: 
audit 2026-03-09T21:11:33.806236+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:34 vm07 bash[28052]: audit 2026-03-09T21:11:33.795950+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:34 vm07 bash[28052]: audit 2026-03-09T21:11:33.795950+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:34 vm07 bash[28052]: audit 2026-03-09T21:11:33.800735+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:34 vm07 bash[28052]: audit 2026-03-09T21:11:33.800735+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:34 vm07 bash[28052]: audit 2026-03-09T21:11:33.806236+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:34 vm07 bash[28052]: audit 2026-03-09T21:11:33.806236+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:34 vm10 bash[23387]: audit 2026-03-09T21:11:33.795950+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:11:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:34 vm10 
bash[23387]: audit 2026-03-09T21:11:33.795950+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:11:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:34 vm10 bash[23387]: audit 2026-03-09T21:11:33.800735+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:34 vm10 bash[23387]: audit 2026-03-09T21:11:33.800735+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:34 vm10 bash[23387]: audit 2026-03-09T21:11:33.806236+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:34 vm10 bash[23387]: audit 2026-03-09T21:11:33.806236+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:35 vm10 bash[23387]: cluster 2026-03-09T21:11:34.584701+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:35 vm10 bash[23387]: cluster 2026-03-09T21:11:34.584701+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:35 vm07 bash[20771]: cluster 2026-03-09T21:11:34.584701+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:35 vm07 bash[20771]: cluster 2026-03-09T21:11:34.584701+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 
0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:35 vm07 bash[28052]: cluster 2026-03-09T21:11:34.584701+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:36.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:35 vm07 bash[28052]: cluster 2026-03-09T21:11:34.584701+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:37 vm10 bash[23387]: cluster 2026-03-09T21:11:36.584887+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:37 vm10 bash[23387]: cluster 2026-03-09T21:11:36.584887+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:37 vm10 bash[23387]: audit 2026-03-09T21:11:37.063191+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:37 vm10 bash[23387]: audit 2026-03-09T21:11:37.063191+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:37 vm10 bash[23387]: audit 2026-03-09T21:11:37.063469+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:37 vm10 bash[23387]: audit 
2026-03-09T21:11:37.063469+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:37 vm07 bash[20771]: cluster 2026-03-09T21:11:36.584887+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:37 vm07 bash[20771]: cluster 2026-03-09T21:11:36.584887+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:37 vm07 bash[20771]: audit 2026-03-09T21:11:37.063191+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:37 vm07 bash[20771]: audit 2026-03-09T21:11:37.063191+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:37 vm07 bash[20771]: audit 2026-03-09T21:11:37.063469+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:37 vm07 bash[20771]: audit 2026-03-09T21:11:37.063469+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:37 vm07 bash[28052]: cluster 2026-03-09T21:11:36.584887+0000 mgr.y (mgr.14150) 75 : 
cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:37 vm07 bash[28052]: cluster 2026-03-09T21:11:36.584887+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:37 vm07 bash[28052]: audit 2026-03-09T21:11:37.063191+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:37 vm07 bash[28052]: audit 2026-03-09T21:11:37.063191+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:37 vm07 bash[28052]: audit 2026-03-09T21:11:37.063469+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:38.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:37 vm07 bash[28052]: audit 2026-03-09T21:11:37.063469+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.916292+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.916292+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd 
crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: cluster 2026-03-09T21:11:37.919309+0000 mon.a (mon.0) 282 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: cluster 2026-03-09T21:11:37.919309+0000 mon.a (mon.0) 282 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.919470+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.919470+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.919835+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.919835+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.920075+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, 
"args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:38 vm10 bash[23387]: audit 2026-03-09T21:11:37.920075+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.916292+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.916292+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: cluster 2026-03-09T21:11:37.919309+0000 mon.a (mon.0) 282 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: cluster 2026-03-09T21:11:37.919309+0000 mon.a (mon.0) 282 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.919470+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.919470+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: 
audit 2026-03-09T21:11:37.919835+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.919835+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.920075+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:38 vm07 bash[20771]: audit 2026-03-09T21:11:37.920075+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: audit 2026-03-09T21:11:37.916292+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: audit 2026-03-09T21:11:37.916292+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: cluster 2026-03-09T21:11:37.919309+0000 mon.a (mon.0) 282 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 
in
2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: cluster 2026-03-09T21:11:37.919309+0000 mon.a (mon.0) 282 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: audit 2026-03-09T21:11:37.919470+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: audit 2026-03-09T21:11:37.919835+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.107:6801/2141296969' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T21:11:39.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:38 vm07 bash[28052]: audit 2026-03-09T21:11:37.920075+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T21:11:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:40 vm10 bash[23387]: cluster 2026-03-09T21:11:38.585049+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:40 vm10 bash[23387]: audit 2026-03-09T21:11:38.919462+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T21:11:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:40 vm10 bash[23387]: cluster 2026-03-09T21:11:38.925095+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-09T21:11:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:40 vm10 bash[23387]: audit 2026-03-09T21:11:38.926147+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:40 vm10 bash[23387]: audit 2026-03-09T21:11:38.927493+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:40 vm10 bash[23387]: audit 2026-03-09T21:11:39.926477+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:40 vm07 bash[20771]: cluster 2026-03-09T21:11:38.585049+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:40 vm07 bash[20771]: audit 2026-03-09T21:11:38.919462+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:40 vm07 bash[20771]: cluster 2026-03-09T21:11:38.925095+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:40 vm07 bash[20771]: audit 2026-03-09T21:11:38.926147+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:40 vm07 bash[20771]: audit 2026-03-09T21:11:38.927493+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:40 vm07 bash[20771]: audit 2026-03-09T21:11:39.926477+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:40 vm07 bash[28052]: cluster 2026-03-09T21:11:38.585049+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:40 vm07 bash[28052]: audit 2026-03-09T21:11:38.919462+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:40 vm07 bash[28052]: cluster 2026-03-09T21:11:38.925095+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:40 vm07 bash[28052]: audit 2026-03-09T21:11:38.926147+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:40 vm07 bash[28052]: audit 2026-03-09T21:11:38.927493+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:40.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:40 vm07 bash[28052]: audit 2026-03-09T21:11:39.926477+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:41.212 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 0 on host 'vm07'
2026-03-09T21:11:41.287 DEBUG:teuthology.orchestra.run.vm07:osd.0> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.0.service
2026-03-09T21:11:41.288 INFO:tasks.cephadm:Deploying osd.1 on vm07 with /dev/vdd...
2026-03-09T21:11:41.288 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vdd
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: cluster 2026-03-09T21:11:38.098502+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: cluster 2026-03-09T21:11:38.098559+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: cluster 2026-03-09T21:11:40.117050+0000 mon.a (mon.0) 290 : cluster [INF] osd.0 v2:192.168.123.107:6801/2141296969 boot
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: cluster 2026-03-09T21:11:40.117088+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: audit 2026-03-09T21:11:40.118064+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: audit 2026-03-09T21:11:40.130991+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: audit 2026-03-09T21:11:40.135500+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: audit 2026-03-09T21:11:40.520958+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: audit 2026-03-09T21:11:40.521436+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:41 vm10 bash[23387]: audit 2026-03-09T21:11:40.525556+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: cluster 2026-03-09T21:11:38.098502+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: cluster 2026-03-09T21:11:38.098559+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: cluster 2026-03-09T21:11:40.117050+0000 mon.a (mon.0) 290 : cluster [INF] osd.0 v2:192.168.123.107:6801/2141296969 boot
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: cluster 2026-03-09T21:11:40.117088+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: audit 2026-03-09T21:11:40.118064+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: audit 2026-03-09T21:11:40.130991+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: audit 2026-03-09T21:11:40.135500+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: audit 2026-03-09T21:11:40.520958+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: audit 2026-03-09T21:11:40.521436+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:41 vm07 bash[20771]: audit 2026-03-09T21:11:40.525556+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: cluster 2026-03-09T21:11:38.098502+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: cluster 2026-03-09T21:11:38.098559+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: cluster 2026-03-09T21:11:40.117050+0000 mon.a (mon.0) 290 : cluster [INF] osd.0 v2:192.168.123.107:6801/2141296969 boot
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: cluster 2026-03-09T21:11:40.117088+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: audit 2026-03-09T21:11:40.118064+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: audit 2026-03-09T21:11:40.130991+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: audit 2026-03-09T21:11:40.135500+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: audit 2026-03-09T21:11:40.520958+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: audit 2026-03-09T21:11:40.521436+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:41.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:41 vm07 bash[28052]: audit 2026-03-09T21:11:40.525556+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:42 vm10 bash[23387]: cluster 2026-03-09T21:11:40.585198+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:42 vm10 bash[23387]: audit 2026-03-09T21:11:41.193406+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:42 vm10 bash[23387]: audit 2026-03-09T21:11:41.199262+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:42 vm10 bash[23387]: audit 2026-03-09T21:11:41.205488+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:42 vm10 bash[23387]: cluster 2026-03-09T21:11:41.535979+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T21:11:42.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:42 vm07 bash[20771]: cluster 2026-03-09T21:11:40.585198+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:42 vm07 bash[20771]: audit 2026-03-09T21:11:41.193406+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:42 vm07 bash[20771]: audit 2026-03-09T21:11:41.199262+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:42 vm07 bash[20771]: audit 2026-03-09T21:11:41.205488+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:42 vm07 bash[20771]: cluster 2026-03-09T21:11:41.535979+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:42 vm07 bash[28052]: cluster 2026-03-09T21:11:40.585198+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:42 vm07 bash[28052]: audit 2026-03-09T21:11:41.193406+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:42 vm07 bash[28052]: audit 2026-03-09T21:11:41.199262+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:42 vm07 bash[28052]: audit 2026-03-09T21:11:41.205488+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:42.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:42 vm07 bash[28052]: cluster 2026-03-09T21:11:41.535979+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T21:11:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:44 vm10 bash[23387]: cluster 2026-03-09T21:11:42.585431+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:44.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:44 vm07 bash[20771]: cluster 2026-03-09T21:11:42.585431+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:44.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:44 vm07 bash[28052]: cluster 2026-03-09T21:11:42.585431+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:45.950 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config
2026-03-09T21:11:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:46 vm07 bash[20771]: cluster 2026-03-09T21:11:44.585675+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:46 vm07 bash[28052]: cluster 2026-03-09T21:11:44.585675+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:46 vm10 bash[23387]: cluster 2026-03-09T21:11:44.585675+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:47.541 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:11:47.555 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm07:/dev/vdd
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: cluster 2026-03-09T21:11:46.586070+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: cephadm 2026-03-09T21:11:46.734770+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: audit 2026-03-09T21:11:46.740861+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: audit 2026-03-09T21:11:46.746970+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: audit 2026-03-09T21:11:46.747787+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: audit 2026-03-09T21:11:46.748427+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: audit 2026-03-09T21:11:46.748819+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:47 vm07 bash[20771]: audit 2026-03-09T21:11:46.752966+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: cluster 2026-03-09T21:11:46.586070+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: cephadm 2026-03-09T21:11:46.734770+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: cephadm 2026-03-09T21:11:46.734770+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices
on vm07 2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.740861+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.740861+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:47.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.746970+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.746970+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.747787+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.747787+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.748427+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.748427+0000 mon.a (mon.0) 
305 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.748819+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.748819+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.752966+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:47.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:47 vm07 bash[28052]: audit 2026-03-09T21:11:46.752966+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: cluster 2026-03-09T21:11:46.586070+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: cluster 2026-03-09T21:11:46.586070+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: cephadm 2026-03-09T21:11:46.734770+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: cephadm 
2026-03-09T21:11:46.734770+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.740861+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.740861+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.746970+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.746970+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.747787+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.747787+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.748427+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:48.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.748427+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.748819+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.748819+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.752966+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:47 vm10 bash[23387]: audit 2026-03-09T21:11:46.752966+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:11:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:49 vm07 bash[20771]: cluster 2026-03-09T21:11:48.586306+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:49 vm07 bash[20771]: cluster 2026-03-09T21:11:48.586306+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:50.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:49 vm07 bash[28052]: cluster 2026-03-09T21:11:48.586306+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 
20 GiB / 20 GiB avail 2026-03-09T21:11:50.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:49 vm07 bash[28052]: cluster 2026-03-09T21:11:48.586306+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:49 vm10 bash[23387]: cluster 2026-03-09T21:11:48.586306+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:49 vm10 bash[23387]: cluster 2026-03-09T21:11:48.586306+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:51 vm07 bash[20771]: cluster 2026-03-09T21:11:50.586500+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:51 vm07 bash[20771]: cluster 2026-03-09T21:11:50.586500+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:51 vm07 bash[28052]: cluster 2026-03-09T21:11:50.586500+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:51 vm07 bash[28052]: cluster 2026-03-09T21:11:50.586500+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:52.166 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:11:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:51 vm10 bash[23387]: cluster 2026-03-09T21:11:50.586500+0000 mgr.y (mgr.14150) 83 
: cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:51 vm10 bash[23387]: cluster 2026-03-09T21:11:50.586500+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.433053+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.433053+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.434505+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.434505+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.436196+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 
2026-03-09T21:11:52.436196+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.436735+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:52 vm07 bash[20771]: audit 2026-03-09T21:11:52.436735+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.433053+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.433053+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.434505+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.434505+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": 
["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.436196+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.436196+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.436735+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:53.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:52 vm07 bash[28052]: audit 2026-03-09T21:11:52.436735+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.433053+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.433053+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.434505+0000 mon.a 
(mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.434505+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.436196+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.436196+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.436735+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:52 vm10 bash[23387]: audit 2026-03-09T21:11:52.436735+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:11:54.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:53 vm07 bash[20771]: cluster 2026-03-09T21:11:52.586769+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:54.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:53 vm07 bash[20771]: cluster 
2026-03-09T21:11:52.586769+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:54.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:53 vm07 bash[28052]: cluster 2026-03-09T21:11:52.586769+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:54.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:53 vm07 bash[28052]: cluster 2026-03-09T21:11:52.586769+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:53 vm10 bash[23387]: cluster 2026-03-09T21:11:52.586769+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:53 vm10 bash[23387]: cluster 2026-03-09T21:11:52.586769+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:56.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:55 vm07 bash[20771]: cluster 2026-03-09T21:11:54.587007+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:56.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:55 vm07 bash[20771]: cluster 2026-03-09T21:11:54.587007+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:55 vm07 bash[28052]: cluster 2026-03-09T21:11:54.587007+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:55 vm07 bash[28052]: cluster 2026-03-09T21:11:54.587007+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: 
; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:55 vm10 bash[23387]: cluster 2026-03-09T21:11:54.587007+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:55 vm10 bash[23387]: cluster 2026-03-09T21:11:54.587007+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:57 vm07 bash[20771]: cluster 2026-03-09T21:11:56.587289+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:57 vm07 bash[20771]: cluster 2026-03-09T21:11:56.587289+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:57 vm07 bash[28052]: cluster 2026-03-09T21:11:56.587289+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:57 vm07 bash[28052]: cluster 2026-03-09T21:11:56.587289+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:57 vm10 bash[23387]: cluster 2026-03-09T21:11:56.587289+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:57 vm10 bash[23387]: cluster 2026-03-09T21:11:56.587289+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T21:11:59.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.850038+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 192.168.123.107:0/3659950488' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.850038+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 192.168.123.107:0/3659950488' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.850385+0000 mon.a (mon.0) 311 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.850385+0000 mon.a (mon.0) 311 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.853525+0000 mon.a (mon.0) 312 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]': finished 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.853525+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]': finished 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: cluster 2026-03-09T21:11:57.856923+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: cluster 2026-03-09T21:11:57.856923+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.857227+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:57.857227+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:58.537259+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.107:0/2358869546' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:58 vm07 bash[20771]: audit 2026-03-09T21:11:58.537259+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.107:0/2358869546' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:58 vm07 bash[28052]: audit 2026-03-09T21:11:57.850038+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 
192.168.123.107:0/3659950488' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch
2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:58 vm07 bash[28052]: audit 2026-03-09T21:11:57.850385+0000 mon.a (mon.0) 311 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch
2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:58 vm07 bash[28052]: audit 2026-03-09T21:11:57.853525+0000 mon.a (mon.0) 312 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]': finished
2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:58 vm07 bash[28052]: cluster 2026-03-09T21:11:57.856923+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:58 vm07 bash[28052]: audit 2026-03-09T21:11:57.857227+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:11:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:58 vm07 bash[28052]: audit 2026-03-09T21:11:58.537259+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.107:0/2358869546' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:11:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:58 vm10 bash[23387]: audit 2026-03-09T21:11:57.850038+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 192.168.123.107:0/3659950488' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch
2026-03-09T21:11:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:58 vm10 bash[23387]: audit 2026-03-09T21:11:57.850385+0000 mon.a (mon.0) 311 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]: dispatch
2026-03-09T21:11:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:58 vm10 bash[23387]: audit 2026-03-09T21:11:57.853525+0000 mon.a (mon.0) 312 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "98ca1795-9ed4-4ffb-8a3f-f26e615f554f"}]': finished
2026-03-09T21:11:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:58 vm10 bash[23387]: cluster 2026-03-09T21:11:57.856923+0000 mon.a (mon.0) 313 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-09T21:11:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:58 vm10 bash[23387]: audit 2026-03-09T21:11:57.857227+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:11:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:58 vm10 bash[23387]: audit 2026-03-09T21:11:58.537259+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.107:0/2358869546' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:12:00.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:11:59 vm07 bash[20771]: cluster 2026-03-09T21:11:58.587590+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:00.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:11:59 vm07 bash[28052]: cluster 2026-03-09T21:11:58.587590+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:00.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:11:59 vm10 bash[23387]: cluster 2026-03-09T21:11:58.587590+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:02.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:01 vm07 bash[20771]: cluster 2026-03-09T21:12:00.587811+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:02.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:01 vm07 bash[28052]: cluster 2026-03-09T21:12:00.587811+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:02.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:01 vm10 bash[23387]: cluster 2026-03-09T21:12:00.587811+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:04.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:03 vm10 bash[23387]: cluster 2026-03-09T21:12:02.588061+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:04.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:03 vm07 bash[20771]: cluster 2026-03-09T21:12:02.588061+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:04.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:03 vm07 bash[28052]: cluster 2026-03-09T21:12:02.588061+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:05 vm10 bash[23387]: cluster 2026-03-09T21:12:04.588325+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:06.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:05 vm07 bash[20771]: cluster 2026-03-09T21:12:04.588325+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:06.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:05 vm07 bash[28052]: cluster 2026-03-09T21:12:04.588325+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:07.786 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:07.786 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:07.786 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:07.786 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:08.067 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:07 vm07 bash[20771]: cluster 2026-03-09T21:12:06.588587+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:07 vm07 bash[20771]: audit 2026-03-09T21:12:06.955037+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:07 vm07 bash[20771]: audit 2026-03-09T21:12:06.955552+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:07 vm07 bash[20771]: cephadm 2026-03-09T21:12:06.955974+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:07 vm07 bash[28052]: cluster 2026-03-09T21:12:06.588587+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:07 vm07 bash[28052]: audit 2026-03-09T21:12:06.955037+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:07 vm07 bash[28052]: audit 2026-03-09T21:12:06.955552+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:07 vm07 bash[28052]: cephadm 2026-03-09T21:12:06.955974+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-09T21:12:08.068 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:07 vm10 bash[23387]: cluster 2026-03-09T21:12:06.588587+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:07 vm10 bash[23387]: audit 2026-03-09T21:12:06.955037+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T21:12:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:07 vm10 bash[23387]: audit 2026-03-09T21:12:06.955552+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:07 vm10 bash[23387]: cephadm 2026-03-09T21:12:06.955974+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-09T21:12:09.182 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:08 vm07 bash[20771]: audit 2026-03-09T21:12:08.044293+0000 mon.a (mon.0) 318 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:09.182 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:08 vm07 bash[20771]: audit 2026-03-09T21:12:08.049214+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:09.182 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:08 vm07 bash[20771]: audit 2026-03-09T21:12:08.056697+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:09.182 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:08 vm07 bash[28052]: audit 2026-03-09T21:12:08.044293+0000 mon.a (mon.0) 318 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:09.182 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:08 vm07 bash[28052]: audit 2026-03-09T21:12:08.049214+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:09.182 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:08 vm07 bash[28052]: audit 2026-03-09T21:12:08.056697+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:08 vm10 bash[23387]: audit 2026-03-09T21:12:08.044293+0000 mon.a (mon.0) 318 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:08 vm10 bash[23387]: audit 2026-03-09T21:12:08.049214+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:08 vm10 bash[23387]: audit 2026-03-09T21:12:08.056697+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:10.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:10 vm07 bash[20771]: cluster 2026-03-09T21:12:08.588827+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:10 vm07 bash[28052]: cluster 2026-03-09T21:12:08.588827+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:10 vm10 bash[23387]: cluster 2026-03-09T21:12:08.588827+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:12.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:12 vm07 bash[20771]: cluster 2026-03-09T21:12:10.589066+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:12.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:12 vm07 bash[28052]: cluster 2026-03-09T21:12:10.589066+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:12 vm10 bash[23387]: cluster 2026-03-09T21:12:10.589066+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:13.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:13 vm07 bash[20771]: audit 2026-03-09T21:12:12.111235+0000 mon.a (mon.0) 321 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T21:12:13.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:13 vm07 bash[28052]: audit 2026-03-09T21:12:12.111235+0000 mon.a (mon.0) 321 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T21:12:13.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:13 vm10 bash[23387]: audit 2026-03-09T21:12:12.111235+0000 mon.a (mon.0) 321 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:14 vm07 bash[20771]: cluster 2026-03-09T21:12:12.589352+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:14 vm07 bash[20771]: audit 2026-03-09T21:12:13.077471+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:14 vm07 bash[20771]: cluster 2026-03-09T21:12:13.079615+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:14 vm07 bash[20771]: audit 2026-03-09T21:12:13.079753+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:14 vm07 bash[20771]: audit 2026-03-09T21:12:13.079844+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:14 vm07 bash[28052]: cluster 2026-03-09T21:12:12.589352+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:14 vm07 bash[28052]: audit 2026-03-09T21:12:13.077471+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:14 vm07 bash[28052]: cluster 2026-03-09T21:12:13.079615+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:14 vm07 bash[28052]: audit 2026-03-09T21:12:13.079753+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T21:12:14.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:14 vm07 bash[28052]: audit 2026-03-09T21:12:13.079844+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:14 vm10 bash[23387]: cluster 2026-03-09T21:12:12.589352+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:14 vm10 bash[23387]: audit 2026-03-09T21:12:13.077471+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T21:12:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:14 vm10 bash[23387]: cluster 2026-03-09T21:12:13.079615+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-09T21:12:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:14 vm10 bash[23387]: audit 2026-03-09T21:12:13.079753+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T21:12:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:14 vm10 bash[23387]: audit 2026-03-09T21:12:13.079844+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:14 vm10 bash[23387]: audit
2026-03-09T21:12:13.079844+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.080318+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.080318+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: cluster 2026-03-09T21:12:14.082670+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: cluster 2026-03-09T21:12:14.082670+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.083699+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.083699+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.087054+0000 mon.a (mon.0) 329 
: audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.087054+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.387863+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.387863+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.396279+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.396279+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.397250+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.397250+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 
2026-03-09T21:12:14.398005+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.398005+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.406851+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:14.406851+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:15.086679+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:15 vm07 bash[20771]: audit 2026-03-09T21:12:15.086679+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.080318+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.080318+0000 mon.a (mon.0) 
326 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: cluster 2026-03-09T21:12:14.082670+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: cluster 2026-03-09T21:12:14.082670+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.083699+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.083699+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.087054+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.087054+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.387863+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.367 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.387863+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.396279+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.396279+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.397250+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.397250+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.398005+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.398005+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.406851+0000 mon.a (mon.0) 334 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:14.406851+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:15.086679+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.367 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:15 vm07 bash[28052]: audit 2026-03-09T21:12:15.086679+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:12:15.459 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 1 on host 'vm07' 2026-03-09T21:12:15.561 DEBUG:teuthology.orchestra.run.vm07:osd.1> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.1.service 2026-03-09T21:12:15.562 INFO:tasks.cephadm:Deploying osd.2 on vm07 with /dev/vdc... 
2026-03-09T21:12:15.563 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vdc
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.080318+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.107:6805/4103893323' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: cluster 2026-03-09T21:12:14.082670+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.083699+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.087054+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.387863+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.396279+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.397250+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.398005+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:14.406851+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:15 vm10 bash[23387]: audit 2026-03-09T21:12:15.086679+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: cluster 2026-03-09T21:12:13.068545+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: cluster 2026-03-09T21:12:13.068957+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: cluster 2026-03-09T21:12:14.589630+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: cluster 2026-03-09T21:12:15.110290+0000 mon.a (mon.0) 336 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: cluster 2026-03-09T21:12:15.110437+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: audit 2026-03-09T21:12:15.162187+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: audit 2026-03-09T21:12:15.439974+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: audit 2026-03-09T21:12:15.446886+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: audit 2026-03-09T21:12:15.452851+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:16 vm07 bash[20771]: cluster 2026-03-09T21:12:16.113863+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: cluster 2026-03-09T21:12:13.068545+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: cluster 2026-03-09T21:12:13.068957+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: cluster 2026-03-09T21:12:14.589630+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: cluster 2026-03-09T21:12:15.110290+0000 mon.a (mon.0) 336 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: cluster 2026-03-09T21:12:15.110437+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: audit 2026-03-09T21:12:15.162187+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: audit 2026-03-09T21:12:15.439974+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: audit 2026-03-09T21:12:15.446886+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:16.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: audit 2026-03-09T21:12:15.452851+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:16.617 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:16 vm07 bash[28052]: cluster 2026-03-09T21:12:16.113863+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: cluster 2026-03-09T21:12:13.068545+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: cluster 2026-03-09T21:12:13.068957+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: cluster 2026-03-09T21:12:14.589630+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: cluster 2026-03-09T21:12:15.110290+0000 mon.a (mon.0) 336 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: cluster 2026-03-09T21:12:15.110437+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: audit 2026-03-09T21:12:15.162187+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: audit 2026-03-09T21:12:15.439974+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: audit 2026-03-09T21:12:15.446886+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: audit 2026-03-09T21:12:15.452851+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:16 vm10 bash[23387]: cluster 2026-03-09T21:12:16.113863+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-09T21:12:18.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:18 vm07 bash[20771]: cluster 2026-03-09T21:12:16.589858+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:18 vm07 bash[28052]: cluster 2026-03-09T21:12:16.589858+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data,
53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:18 vm07 bash[28052]: cluster 2026-03-09T21:12:16.589858+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:18 vm10 bash[23387]: cluster 2026-03-09T21:12:16.589858+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:18 vm10 bash[23387]: cluster 2026-03-09T21:12:16.589858+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:20.227 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:12:20.469 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:20 vm07 bash[20771]: cluster 2026-03-09T21:12:18.590204+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:20.469 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:20 vm07 bash[20771]: cluster 2026-03-09T21:12:18.590204+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:20.469 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:20 vm07 bash[28052]: cluster 2026-03-09T21:12:18.590204+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:20.469 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:20 vm07 bash[28052]: cluster 2026-03-09T21:12:18.590204+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:20 vm10 bash[23387]: cluster 2026-03-09T21:12:18.590204+0000 mgr.y 
(mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:20 vm10 bash[23387]: cluster 2026-03-09T21:12:18.590204+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:21.858 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:12:21.875 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm07:/dev/vdc 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: cluster 2026-03-09T21:12:20.590430+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: cluster 2026-03-09T21:12:20.590430+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: cephadm 2026-03-09T21:12:21.105843+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: cephadm 2026-03-09T21:12:21.105843+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.111500+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 
2026-03-09T21:12:21.111500+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.125682+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.125682+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.129578+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.129578+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.130940+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.130940+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.131518+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.131518+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.139523+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:22 vm07 bash[20771]: audit 2026-03-09T21:12:21.139523+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: cluster 2026-03-09T21:12:20.590430+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: cluster 2026-03-09T21:12:20.590430+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: cephadm 2026-03-09T21:12:21.105843+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: cephadm 2026-03-09T21:12:21.105843+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.111500+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.111500+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.125682+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.125682+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.129578+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.129578+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.130940+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.130940+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: 
audit 2026-03-09T21:12:21.131518+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.131518+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.139523+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:22 vm07 bash[28052]: audit 2026-03-09T21:12:21.139523+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: cluster 2026-03-09T21:12:20.590430+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: cluster 2026-03-09T21:12:20.590430+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: cephadm 2026-03-09T21:12:21.105843+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: cephadm 2026-03-09T21:12:21.105843+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.111500+0000 mon.a 
(mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.111500+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.125682+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.125682+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.129578+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.129578+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.130940+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.130940+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.131518+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.131518+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.139523+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:22.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:22 vm10 bash[23387]: audit 2026-03-09T21:12:21.139523+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:24.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:24 vm07 bash[20771]: cluster 2026-03-09T21:12:22.590659+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:24.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:24 vm07 bash[20771]: cluster 2026-03-09T21:12:22.590659+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:24 vm07 bash[28052]: cluster 2026-03-09T21:12:22.590659+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:24 vm07 bash[28052]: cluster 2026-03-09T21:12:22.590659+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB 
avail 2026-03-09T21:12:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:24 vm10 bash[23387]: cluster 2026-03-09T21:12:22.590659+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:24 vm10 bash[23387]: cluster 2026-03-09T21:12:22.590659+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:26.505 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:12:26.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:26 vm07 bash[20771]: cluster 2026-03-09T21:12:24.590874+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:26.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:26 vm07 bash[20771]: cluster 2026-03-09T21:12:24.590874+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:26.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:26 vm07 bash[28052]: cluster 2026-03-09T21:12:24.590874+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:26.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:26 vm07 bash[28052]: cluster 2026-03-09T21:12:24.590874+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:26 vm10 bash[23387]: cluster 2026-03-09T21:12:24.590874+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:26 vm10 bash[23387]: cluster 2026-03-09T21:12:24.590874+0000 mgr.y (mgr.14150) 103 : 
cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:27 vm07 bash[20771]: audit 2026-03-09T21:12:26.778706+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:27 vm07 bash[20771]: audit 2026-03-09T21:12:26.778706+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:27 vm07 bash[20771]: audit 2026-03-09T21:12:26.780200+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:27 vm07 bash[20771]: audit 2026-03-09T21:12:26.780200+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:27 vm07 bash[20771]: audit 2026-03-09T21:12:26.780666+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:27 vm07 bash[20771]: audit 2026-03-09T21:12:26.780666+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:27 vm07 bash[28052]: audit 
2026-03-09T21:12:26.778706+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:27 vm07 bash[28052]: audit 2026-03-09T21:12:26.778706+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:27 vm07 bash[28052]: audit 2026-03-09T21:12:26.780200+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:27 vm07 bash[28052]: audit 2026-03-09T21:12:26.780200+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:27 vm07 bash[28052]: audit 2026-03-09T21:12:26.780666+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:27 vm07 bash[28052]: audit 2026-03-09T21:12:26.780666+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:27 vm10 bash[23387]: audit 2026-03-09T21:12:26.778706+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: 
dispatch 2026-03-09T21:12:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:27 vm10 bash[23387]: audit 2026-03-09T21:12:26.778706+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:12:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:27 vm10 bash[23387]: audit 2026-03-09T21:12:26.780200+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:12:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:27 vm10 bash[23387]: audit 2026-03-09T21:12:26.780200+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:12:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:27 vm10 bash[23387]: audit 2026-03-09T21:12:26.780666+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:27 vm10 bash[23387]: audit 2026-03-09T21:12:26.780666+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:28 vm10 bash[23387]: cluster 2026-03-09T21:12:26.591152+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:28 vm10 bash[23387]: cluster 2026-03-09T21:12:26.591152+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:28.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:28 vm10 bash[23387]: audit 2026-03-09T21:12:26.776833+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:12:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:28 vm10 bash[23387]: audit 2026-03-09T21:12:26.776833+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:28 vm07 bash[20771]: cluster 2026-03-09T21:12:26.591152+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:28 vm07 bash[20771]: cluster 2026-03-09T21:12:26.591152+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:28 vm07 bash[20771]: audit 2026-03-09T21:12:26.776833+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:28 vm07 bash[20771]: audit 2026-03-09T21:12:26.776833+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:28 vm07 bash[28052]: cluster 2026-03-09T21:12:26.591152+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:28.866 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:28 vm07 bash[28052]: cluster 2026-03-09T21:12:26.591152+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:28 vm07 bash[28052]: audit 2026-03-09T21:12:26.776833+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:12:28.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:28 vm07 bash[28052]: audit 2026-03-09T21:12:26.776833+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24149 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:12:29.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:29 vm07 bash[20771]: cluster 2026-03-09T21:12:28.591424+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:29.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:29 vm07 bash[20771]: cluster 2026-03-09T21:12:28.591424+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:29 vm07 bash[28052]: cluster 2026-03-09T21:12:28.591424+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:29 vm07 bash[28052]: cluster 2026-03-09T21:12:28.591424+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:29.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:29 vm10 bash[23387]: cluster 2026-03-09T21:12:28.591424+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap 
v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:31 vm07 bash[20771]: cluster 2026-03-09T21:12:30.591633+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:31 vm07 bash[28052]: cluster 2026-03-09T21:12:30.591633+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:31 vm10 bash[23387]: cluster 2026-03-09T21:12:30.591633+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:32 vm07 bash[20771]: audit 2026-03-09T21:12:32.211889+0000 mon.a (mon.0) 352 : audit [INF] from='client.? 192.168.123.107:0/2895788452' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a040af0-0bb5-4407-ba5f-64091d0e0685"}]: dispatch
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:32 vm07 bash[20771]: audit 2026-03-09T21:12:32.215199+0000 mon.a (mon.0) 353 : audit [INF] from='client.? 192.168.123.107:0/2895788452' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a040af0-0bb5-4407-ba5f-64091d0e0685"}]': finished
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:32 vm07 bash[20771]: cluster 2026-03-09T21:12:32.218115+0000 mon.a (mon.0) 354 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:32 vm07 bash[20771]: audit 2026-03-09T21:12:32.218268+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:32 vm07 bash[28052]: audit 2026-03-09T21:12:32.211889+0000 mon.a (mon.0) 352 : audit [INF] from='client.? 192.168.123.107:0/2895788452' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a040af0-0bb5-4407-ba5f-64091d0e0685"}]: dispatch
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:32 vm07 bash[28052]: audit 2026-03-09T21:12:32.215199+0000 mon.a (mon.0) 353 : audit [INF] from='client.? 192.168.123.107:0/2895788452' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a040af0-0bb5-4407-ba5f-64091d0e0685"}]': finished
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:32 vm07 bash[28052]: cluster 2026-03-09T21:12:32.218115+0000 mon.a (mon.0) 354 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-09T21:12:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:32 vm07 bash[28052]: audit 2026-03-09T21:12:32.218268+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:12:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:32 vm10 bash[23387]: audit 2026-03-09T21:12:32.211889+0000 mon.a (mon.0) 352 : audit [INF] from='client.? 192.168.123.107:0/2895788452' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a040af0-0bb5-4407-ba5f-64091d0e0685"}]: dispatch
2026-03-09T21:12:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:32 vm10 bash[23387]: audit 2026-03-09T21:12:32.215199+0000 mon.a (mon.0) 353 : audit [INF] from='client.? 192.168.123.107:0/2895788452' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a040af0-0bb5-4407-ba5f-64091d0e0685"}]': finished
2026-03-09T21:12:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:32 vm10 bash[23387]: cluster 2026-03-09T21:12:32.218115+0000 mon.a (mon.0) 354 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-09T21:12:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:32 vm10 bash[23387]: audit 2026-03-09T21:12:32.218268+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:12:33.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:33 vm10 bash[23387]: cluster 2026-03-09T21:12:32.591867+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:33.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:33 vm10 bash[23387]: audit 2026-03-09T21:12:32.889456+0000 mon.c (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/3424283836' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:12:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:33 vm07 bash[20771]: cluster 2026-03-09T21:12:32.591867+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:33 vm07 bash[20771]: audit 2026-03-09T21:12:32.889456+0000 mon.c (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/3424283836' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:12:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:33 vm07 bash[28052]: cluster 2026-03-09T21:12:32.591867+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:33 vm07 bash[28052]: audit 2026-03-09T21:12:32.889456+0000 mon.c (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/3424283836' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T21:12:35.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:35 vm10 bash[23387]: cluster 2026-03-09T21:12:34.592166+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:36.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:35 vm07 bash[20771]: cluster 2026-03-09T21:12:34.592166+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:36.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:35 vm07 bash[28052]: cluster 2026-03-09T21:12:34.592166+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:37.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:37 vm10 bash[23387]: cluster 2026-03-09T21:12:36.592728+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:38.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:37 vm07 bash[20771]: cluster 2026-03-09T21:12:36.592728+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:38.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:37 vm07 bash[28052]: cluster 2026-03-09T21:12:36.592728+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:40.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:39 vm07 bash[20771]: cluster 2026-03-09T21:12:38.592997+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:40.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:39 vm07 bash[28052]: cluster 2026-03-09T21:12:38.592997+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:39 vm10 bash[23387]: cluster 2026-03-09T21:12:38.592997+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:42.032 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:41 vm07 bash[20771]: cluster 2026-03-09T21:12:40.593205+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:42.032 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:41 vm07 bash[20771]: audit 2026-03-09T21:12:41.685604+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T21:12:42.032 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:41 vm07 bash[20771]: audit 2026-03-09T21:12:41.686567+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:42.032 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:41 vm07 bash[28052]: cluster 2026-03-09T21:12:40.593205+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:42.032 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:41 vm07 bash[28052]: audit 2026-03-09T21:12:41.685604+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T21:12:42.032 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:41 vm07 bash[28052]: audit 2026-03-09T21:12:41.686567+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:41 vm10 bash[23387]: cluster 2026-03-09T21:12:40.593205+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:41 vm10 bash[23387]: audit 2026-03-09T21:12:41.685604+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T21:12:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:41 vm10 bash[23387]: audit 2026-03-09T21:12:41.686567+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:42.545 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.546 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.546 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.546 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.546 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.871 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.872 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:42 vm07 bash[20771]: cephadm 2026-03-09T21:12:41.687200+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm07
2026-03-09T21:12:42.872 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.872 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.872 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:42.872 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:42 vm07 bash[28052]: cephadm 2026-03-09T21:12:41.687200+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm07
2026-03-09T21:12:42.872 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:12:42 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:12:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:42 vm10 bash[23387]: cephadm 2026-03-09T21:12:41.687200+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm07
2026-03-09T21:12:44.094 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:43 vm07 bash[20771]: cluster 2026-03-09T21:12:42.593459+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:44.094 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:43 vm07 bash[20771]: audit 2026-03-09T21:12:42.796830+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:44.094 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:43 vm07 bash[20771]: audit 2026-03-09T21:12:42.803166+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:44.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:43 vm07 bash[20771]: audit 2026-03-09T21:12:42.809752+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:44.095 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:43 vm07 bash[28052]: cluster 2026-03-09T21:12:42.593459+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:44.095 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:43 vm07 bash[28052]: audit 2026-03-09T21:12:42.796830+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:44.095 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:43 vm07 bash[28052]: audit 2026-03-09T21:12:42.803166+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:44.095 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:43 vm07 bash[28052]: audit 2026-03-09T21:12:42.809752+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:43 vm10 bash[23387]: cluster 2026-03-09T21:12:42.593459+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:43 vm10 bash[23387]: audit 2026-03-09T21:12:42.796830+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:12:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:43 vm10 bash[23387]: audit 2026-03-09T21:12:42.803166+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:43 vm10 bash[23387]: audit 2026-03-09T21:12:42.809752+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:46.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:45 vm07 bash[20771]: cluster 2026-03-09T21:12:44.593679+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:46.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:45 vm07 bash[28052]: cluster 2026-03-09T21:12:44.593679+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:45 vm10 bash[23387]: cluster 2026-03-09T21:12:44.593679+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:47.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:46 vm07 bash[20771]: audit 2026-03-09T21:12:46.224019+0000 mon.a (mon.0) 361 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T21:12:47.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:46 vm07 bash[28052]: audit 2026-03-09T21:12:46.224019+0000 mon.a (mon.0) 361 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T21:12:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:46 vm10 bash[23387]: audit 2026-03-09T21:12:46.224019+0000 mon.a (mon.0) 361 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T21:12:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:47 vm10 bash[23387]: cluster 2026-03-09T21:12:46.594030+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:47 vm10 bash[23387]: audit 2026-03-09T21:12:46.848581+0000 mon.a (mon.0) 362 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T21:12:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:47 vm10 bash[23387]: cluster 2026-03-09T21:12:46.852487+0000 mon.a (mon.0) 363 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-09T21:12:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:47 vm10 bash[23387]: audit 2026-03-09T21:12:46.852670+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T21:12:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:47 vm10 bash[23387]: audit 2026-03-09T21:12:46.852783+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: cluster 2026-03-09T21:12:46.594030+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: audit 2026-03-09T21:12:46.848581+0000 mon.a (mon.0) 362 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: cluster 2026-03-09T21:12:46.852487+0000 mon.a (mon.0) 363 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: cluster 2026-03-09T21:12:46.852487+0000 mon.a
(mon.0) 363 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: audit 2026-03-09T21:12:46.852670+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: audit 2026-03-09T21:12:46.852670+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: audit 2026-03-09T21:12:46.852783+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:47 vm07 bash[20771]: audit 2026-03-09T21:12:46.852783+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: cluster 2026-03-09T21:12:46.594030+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: cluster 2026-03-09T21:12:46.594030+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: audit 2026-03-09T21:12:46.848581+0000 mon.a (mon.0) 362 : audit [INF] from='osd.2 
v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: audit 2026-03-09T21:12:46.848581+0000 mon.a (mon.0) 362 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: cluster 2026-03-09T21:12:46.852487+0000 mon.a (mon.0) 363 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: cluster 2026-03-09T21:12:46.852487+0000 mon.a (mon.0) 363 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: audit 2026-03-09T21:12:46.852670+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: audit 2026-03-09T21:12:46.852670+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: audit 2026-03-09T21:12:46.852783+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:48.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:47 vm07 bash[28052]: audit 2026-03-09T21:12:46.852783+0000 mon.a (mon.0) 365 : 
audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:47.852819+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:47.852819+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: cluster 2026-03-09T21:12:47.858297+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: cluster 2026-03-09T21:12:47.858297+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:47.863629+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:47.863629+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:47.866100+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:47.866100+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:48.869140+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:48 vm07 bash[20771]: audit 2026-03-09T21:12:48.869140+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:47.852819+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:47.852819+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: cluster 2026-03-09T21:12:47.858297+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: cluster 2026-03-09T21:12:47.858297+0000 mon.a (mon.0) 367 
: cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:47.863629+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:47.863629+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:47.866100+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:47.866100+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:48.869140+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:48 vm07 bash[28052]: audit 2026-03-09T21:12:48.869140+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:47.852819+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush 
create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:47.852819+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: cluster 2026-03-09T21:12:47.858297+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: cluster 2026-03-09T21:12:47.858297+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:47.863629+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:47.863629+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:47.866100+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:47.866100+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:48.869140+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:48 vm10 bash[23387]: audit 2026-03-09T21:12:48.869140+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.161 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 2 on host 'vm07' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: cluster 2026-03-09T21:12:47.193467+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: cluster 2026-03-09T21:12:47.193467+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: cluster 2026-03-09T21:12:47.193523+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: cluster 2026-03-09T21:12:47.193523+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: cluster 2026-03-09T21:12:48.594300+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: cluster 2026-03-09T21:12:48.594300+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 
2026-03-09T21:12:48.917677+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:48.917677+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.104190+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.104190+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.110425+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.110425+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.544940+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.544940+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.545528+0000 mon.a (mon.0) 375 : 
audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.545528+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.551644+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.551644+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.866601+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.175 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:49 vm07 bash[20771]: audit 2026-03-09T21:12:49.866601+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: cluster 2026-03-09T21:12:47.193467+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: cluster 2026-03-09T21:12:47.193467+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: cluster 2026-03-09T21:12:47.193523+0000 osd.2 (osd.2) 2 : cluster 
[DBG] purged_snaps scrub ok 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: cluster 2026-03-09T21:12:47.193523+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: cluster 2026-03-09T21:12:48.594300+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: cluster 2026-03-09T21:12:48.594300+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:48.917677+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:48.917677+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.104190+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.104190+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.110425+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.110425+0000 mon.a (mon.0) 373 : 
audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.544940+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.544940+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.545528+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.545528+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.551644+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.551644+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.176 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.866601+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.176 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:49 vm07 bash[28052]: audit 2026-03-09T21:12:49.866601+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: cluster 2026-03-09T21:12:47.193467+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: cluster 2026-03-09T21:12:47.193467+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: cluster 2026-03-09T21:12:47.193523+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: cluster 2026-03-09T21:12:47.193523+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: cluster 2026-03-09T21:12:48.594300+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: cluster 2026-03-09T21:12:48.594300+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:48.917677+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:48.917677+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 v2:192.168.123.107:6809/2553486713' entity='osd.2' 2026-03-09T21:12:50.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.104190+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.104190+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.110425+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.110425+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.544940+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.544940+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:12:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.545528+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:50.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.545528+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:12:50.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.551644+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.551644+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:50.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.866601+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:49 vm10 bash[23387]: audit 2026-03-09T21:12:49.866601+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:50.264 DEBUG:teuthology.orchestra.run.vm07:osd.2> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.2.service 2026-03-09T21:12:50.265 INFO:tasks.cephadm:Deploying osd.3 on vm07 with /dev/vdb... 
2026-03-09T21:12:50.265 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vdb 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: cluster 2026-03-09T21:12:49.935209+0000 mon.a (mon.0) 378 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: cluster 2026-03-09T21:12:49.935209+0000 mon.a (mon.0) 378 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: cluster 2026-03-09T21:12:49.935251+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: cluster 2026-03-09T21:12:49.935251+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:49.935953+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:49.935953+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.145036+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.145036+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.151858+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.151858+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.158308+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.158308+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.638411+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:50 vm10 bash[23387]: audit 2026-03-09T21:12:50.638411+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:51.366 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: cluster 2026-03-09T21:12:49.935209+0000 mon.a (mon.0) 378 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: cluster 2026-03-09T21:12:49.935209+0000 mon.a (mon.0) 378 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: cluster 2026-03-09T21:12:49.935251+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: cluster 2026-03-09T21:12:49.935251+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:49.935953+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:49.935953+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.145036+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.145036+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:12:51.366 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.151858+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.151858+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.158308+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.158308+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.638411+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:50 vm07 bash[28052]: audit 2026-03-09T21:12:50.638411+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: cluster 2026-03-09T21:12:49.935209+0000 mon.a (mon.0) 378 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: cluster 
2026-03-09T21:12:49.935209+0000 mon.a (mon.0) 378 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: cluster 2026-03-09T21:12:49.935251+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: cluster 2026-03-09T21:12:49.935251+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:49.935953+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:51.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:49.935953+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.145036+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.145036+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.151858+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 
2026-03-09T21:12:50.151858+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.158308+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.158308+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.638411+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:51.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:50 vm07 bash[20771]: audit 2026-03-09T21:12:50.638411+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: cluster 2026-03-09T21:12:50.594599+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: cluster 2026-03-09T21:12:50.594599+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: audit 2026-03-09T21:12:51.219478+0000 mon.a (mon.0) 385 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: audit 2026-03-09T21:12:51.219478+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: cluster 2026-03-09T21:12:51.224253+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: cluster 2026-03-09T21:12:51.224253+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: audit 2026-03-09T21:12:51.226118+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:52 vm07 bash[20771]: audit 2026-03-09T21:12:51.226118+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: cluster 2026-03-09T21:12:50.594599+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: cluster 2026-03-09T21:12:50.594599+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: audit 2026-03-09T21:12:51.219478+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: audit 2026-03-09T21:12:51.219478+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: cluster 2026-03-09T21:12:51.224253+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: cluster 2026-03-09T21:12:51.224253+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: audit 2026-03-09T21:12:51.226118+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:52.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:52 vm07 bash[28052]: audit 2026-03-09T21:12:51.226118+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' 
entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: cluster 2026-03-09T21:12:50.594599+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: cluster 2026-03-09T21:12:50.594599+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: audit 2026-03-09T21:12:51.219478+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: audit 2026-03-09T21:12:51.219478+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: cluster 2026-03-09T21:12:51.224253+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: cluster 2026-03-09T21:12:51.224253+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: audit 2026-03-09T21:12:51.226118+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:52.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:52 vm10 bash[23387]: audit 2026-03-09T21:12:51.226118+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:12:53.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:53 vm10 bash[23387]: audit 2026-03-09T21:12:52.223327+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:53.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:53 vm10 bash[23387]: audit 2026-03-09T21:12:52.223327+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:53.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:53 vm10 bash[23387]: cluster 2026-03-09T21:12:52.227776+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T21:12:53.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:53 vm10 bash[23387]: cluster 2026-03-09T21:12:52.227776+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:53 vm07 bash[20771]: audit 2026-03-09T21:12:52.223327+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": 
"mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:53 vm07 bash[20771]: audit 2026-03-09T21:12:52.223327+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:53 vm07 bash[20771]: cluster 2026-03-09T21:12:52.227776+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:53 vm07 bash[20771]: cluster 2026-03-09T21:12:52.227776+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:53 vm07 bash[28052]: audit 2026-03-09T21:12:52.223327+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:53 vm07 bash[28052]: audit 2026-03-09T21:12:52.223327+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:53 vm07 bash[28052]: cluster 2026-03-09T21:12:52.227776+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T21:12:53.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:53 vm07 bash[28052]: cluster 2026-03-09T21:12:52.227776+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T21:12:54.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: cluster 2026-03-09T21:12:52.594897+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: cluster 2026-03-09T21:12:52.594897+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: cluster 2026-03-09T21:12:53.436176+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: cluster 2026-03-09T21:12:53.436176+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.524169+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.524169+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541266+0000 mon.a (mon.0) 392 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541266+0000 mon.a (mon.0) 392 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541705+0000 mon.a (mon.0) 
393 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541705+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541823+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541823+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541886+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.541886+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.543648+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.543648+0000 mon.a (mon.0) 396 : audit [DBG] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.543716+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.543716+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.543905+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.543905+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.544180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.544180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.561403+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T21:12:54.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.561403+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.562397+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.562397+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.563440+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.563440+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.563493+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.563493+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.563529+0000 mon.a (mon.0) 401 : audit [DBG] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.563529+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T21:12:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.580251+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T21:12:54.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:54 vm10 bash[23387]: audit 2026-03-09T21:12:53.580251+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: cluster 2026-03-09T21:12:52.594897+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: cluster 2026-03-09T21:12:52.594897+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: cluster 2026-03-09T21:12:53.436176+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: cluster 2026-03-09T21:12:53.436176+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.524169+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' 
cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.524169+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.541266+0000 mon.a (mon.0) 392 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.541705+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.541823+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.541886+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:12:54.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.543648+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.543716+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.543905+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.544180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.561403+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.562397+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.563440+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.563493+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.563529+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:54 vm07 bash[20771]: audit 2026-03-09T21:12:53.580251+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: cluster 2026-03-09T21:12:52.594897+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v89: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: cluster 2026-03-09T21:12:53.436176+0000 mon.a (mon.0) 390 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.524169+0000 mon.a (mon.0) 391 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.541266+0000 mon.a (mon.0) 392 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.541705+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.541823+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.541886+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.543648+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.543716+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.543905+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.544180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.561403+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.562397+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.563440+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.563493+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.563529+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:12:54.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:54 vm07 bash[28052]: audit 2026-03-09T21:12:53.580251+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T21:12:54.939 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config
2026-03-09T21:12:56.590 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:12:56.607 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm07:/dev/vdb
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: cluster 2026-03-09T21:12:54.595198+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: cluster 2026-03-09T21:12:55.456879+0000 mon.a (mon.0) 402 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: audit 2026-03-09T21:12:55.801929+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: audit 2026-03-09T21:12:55.809457+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: audit 2026-03-09T21:12:55.811704+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: audit 2026-03-09T21:12:55.819484+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: audit 2026-03-09T21:12:55.820365+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:56 vm07 bash[20771]: audit 2026-03-09T21:12:55.825373+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: cluster 2026-03-09T21:12:54.595198+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: cluster 2026-03-09T21:12:55.456879+0000 mon.a (mon.0) 402 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: audit 2026-03-09T21:12:55.801929+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: audit 2026-03-09T21:12:55.809457+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: audit 2026-03-09T21:12:55.811704+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:12:56.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: audit 2026-03-09T21:12:55.819484+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:56.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: audit 2026-03-09T21:12:55.820365+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:12:56.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:56 vm07 bash[28052]: audit 2026-03-09T21:12:55.825373+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: cluster 2026-03-09T21:12:54.595198+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: cluster 2026-03-09T21:12:55.456879+0000 mon.a (mon.0) 402 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: audit 2026-03-09T21:12:55.801929+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: audit 2026-03-09T21:12:55.809457+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: audit 2026-03-09T21:12:55.811704+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: audit 2026-03-09T21:12:55.819484+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: audit 2026-03-09T21:12:55.820365+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:12:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:56 vm10 bash[23387]: audit 2026-03-09T21:12:55.825373+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:12:57.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:57 vm07 bash[28052]: cephadm 2026-03-09T21:12:55.794914+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:12:57.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:57 vm07 bash[20771]: cephadm 2026-03-09T21:12:55.794914+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:12:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:57 vm10 bash[23387]: cephadm 2026-03-09T21:12:55.794914+0000 mgr.y (mgr.14150) 121 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:12:58.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:12:58 vm07 bash[28052]: cluster 2026-03-09T21:12:56.595518+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:12:58.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:12:58 vm07 bash[20771]: cluster 2026-03-09T21:12:56.595518+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:12:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:12:58 vm10 bash[23387]: cluster 2026-03-09T21:12:56.595518+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:00.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:00 vm07 bash[28052]: cluster 2026-03-09T21:12:58.595771+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:00.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:00 vm07 bash[20771]: cluster 2026-03-09T21:12:58.595771+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:00 vm10 bash[23387]: cluster 2026-03-09T21:12:58.595771+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:01.234 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:02 vm07 bash[20771]: cluster 2026-03-09T21:13:00.595975+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:02 vm07 bash[20771]: audit 2026-03-09T21:13:01.510641+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24185 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:02 vm07 bash[20771]: audit 2026-03-09T21:13:01.511973+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:02 vm07 bash[20771]: audit 2026-03-09T21:13:01.513552+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:02 vm07 bash[20771]: audit 2026-03-09T21:13:01.514021+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:02 vm07 bash[28052]: cluster 2026-03-09T21:13:00.595975+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:02 vm07 bash[28052]: audit 2026-03-09T21:13:01.510641+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24185 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:02 vm07 bash[28052]: audit 2026-03-09T21:13:01.511973+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:02 vm07 bash[28052]: audit 2026-03-09T21:13:01.513552+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:13:02.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:02 vm07 bash[28052]: audit 2026-03-09T21:13:01.514021+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: cluster 2026-03-09T21:13:00.595975+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: audit 2026-03-09T21:13:01.510641+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24185 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: audit 2026-03-09T21:13:01.511973+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: audit
2026-03-09T21:13:01.513552+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: audit 2026-03-09T21:13:01.513552+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: audit 2026-03-09T21:13:01.514021+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:02 vm10 bash[23387]: audit 2026-03-09T21:13:01.514021+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:03.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:03 vm07 bash[20771]: cluster 2026-03-09T21:13:02.596214+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:03.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:03 vm07 bash[20771]: cluster 2026-03-09T21:13:02.596214+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:03.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:03 vm07 bash[28052]: cluster 2026-03-09T21:13:02.596214+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:03.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:03 vm07 bash[28052]: cluster 2026-03-09T21:13:02.596214+0000 mgr.y 
(mgr.14150) 126 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:03.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:03 vm10 bash[23387]: cluster 2026-03-09T21:13:02.596214+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:03.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:03 vm10 bash[23387]: cluster 2026-03-09T21:13:02.596214+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:05 vm10 bash[23387]: cluster 2026-03-09T21:13:04.596437+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:05 vm10 bash[23387]: cluster 2026-03-09T21:13:04.596437+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:06.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:05 vm07 bash[20771]: cluster 2026-03-09T21:13:04.596437+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:06.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:05 vm07 bash[20771]: cluster 2026-03-09T21:13:04.596437+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:06.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:05 vm07 bash[28052]: cluster 2026-03-09T21:13:04.596437+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:06.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:05 
vm07 bash[28052]: cluster 2026-03-09T21:13:04.596437+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: cluster 2026-03-09T21:13:06.596751+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: cluster 2026-03-09T21:13:06.596751+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:06.930828+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]: dispatch 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:06.930828+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]: dispatch 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:06.966959+0000 mon.a (mon.0) 413 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]': finished 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:06.966959+0000 mon.a (mon.0) 413 : audit [INF] from='client.? 
192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]': finished 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: cluster 2026-03-09T21:13:06.974392+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: cluster 2026-03-09T21:13:06.974392+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:06.974679+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:06.974679+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:07.610921+0000 mon.c (mon.2) 9 : audit [DBG] from='client.? 192.168.123.107:0/2062571360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:07.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:07 vm10 bash[23387]: audit 2026-03-09T21:13:07.610921+0000 mon.c (mon.2) 9 : audit [DBG] from='client.? 
192.168.123.107:0/2062571360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: cluster 2026-03-09T21:13:06.596751+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: cluster 2026-03-09T21:13:06.596751+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:06.930828+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:06.930828+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:06.966959+0000 mon.a (mon.0) 413 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]': finished 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:06.966959+0000 mon.a (mon.0) 413 : audit [INF] from='client.? 
192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]': finished 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: cluster 2026-03-09T21:13:06.974392+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: cluster 2026-03-09T21:13:06.974392+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:06.974679+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:06.974679+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:07.610921+0000 mon.c (mon.2) 9 : audit [DBG] from='client.? 192.168.123.107:0/2062571360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:07 vm07 bash[20771]: audit 2026-03-09T21:13:07.610921+0000 mon.c (mon.2) 9 : audit [DBG] from='client.? 
192.168.123.107:0/2062571360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: cluster 2026-03-09T21:13:06.596751+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: cluster 2026-03-09T21:13:06.596751+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:06.930828+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:06.930828+0000 mon.a (mon.0) 412 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:06.966959+0000 mon.a (mon.0) 413 : audit [INF] from='client.? 192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]': finished 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:06.966959+0000 mon.a (mon.0) 413 : audit [INF] from='client.? 
192.168.123.107:0/3226952634' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82b53895-a55e-4a96-84b2-f1efa2657688"}]': finished 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: cluster 2026-03-09T21:13:06.974392+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: cluster 2026-03-09T21:13:06.974392+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:06.974679+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:06.974679+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:07.610921+0000 mon.c (mon.2) 9 : audit [DBG] from='client.? 192.168.123.107:0/2062571360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:08.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:07 vm07 bash[28052]: audit 2026-03-09T21:13:07.610921+0000 mon.c (mon.2) 9 : audit [DBG] from='client.? 
192.168.123.107:0/2062571360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:09 vm10 bash[23387]: cluster 2026-03-09T21:13:08.597054+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:09 vm10 bash[23387]: cluster 2026-03-09T21:13:08.597054+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:10.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:09 vm07 bash[28052]: cluster 2026-03-09T21:13:08.597054+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:10.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:09 vm07 bash[28052]: cluster 2026-03-09T21:13:08.597054+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:10.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:09 vm07 bash[20771]: cluster 2026-03-09T21:13:08.597054+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:10.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:09 vm07 bash[20771]: cluster 2026-03-09T21:13:08.597054+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:11.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:11 vm10 bash[23387]: cluster 2026-03-09T21:13:10.597388+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:11.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:11 vm10 
bash[23387]: cluster 2026-03-09T21:13:10.597388+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:12.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:11 vm07 bash[20771]: cluster 2026-03-09T21:13:10.597388+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:12.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:11 vm07 bash[20771]: cluster 2026-03-09T21:13:10.597388+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:12.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:11 vm07 bash[28052]: cluster 2026-03-09T21:13:10.597388+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:12.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:11 vm07 bash[28052]: cluster 2026-03-09T21:13:10.597388+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:13.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:13 vm10 bash[23387]: cluster 2026-03-09T21:13:12.597684+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:13.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:13 vm10 bash[23387]: cluster 2026-03-09T21:13:12.597684+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:14.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:13 vm07 bash[28052]: cluster 2026-03-09T21:13:12.597684+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T21:13:14.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:13 vm07 bash[28052]: cluster 2026-03-09T21:13:12.597684+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:14.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:13 vm07 bash[20771]: cluster 2026-03-09T21:13:12.597684+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:14.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:13 vm07 bash[20771]: cluster 2026-03-09T21:13:12.597684+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:15.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:15 vm10 bash[23387]: cluster 2026-03-09T21:13:14.597983+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:15.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:15 vm10 bash[23387]: cluster 2026-03-09T21:13:14.597983+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:15.995 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:15 vm07 bash[20771]: cluster 2026-03-09T21:13:14.597983+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:15.995 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:15 vm07 bash[20771]: cluster 2026-03-09T21:13:14.597983+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:15.995 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:15 vm07 bash[28052]: cluster 2026-03-09T21:13:14.597983+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v102: 
1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:15.995 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:15 vm07 bash[28052]: cluster 2026-03-09T21:13:14.597983+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:16.790 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:16 vm07 bash[20771]: audit 2026-03-09T21:13:16.268537+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:16 vm07 bash[20771]: audit 2026-03-09T21:13:16.268537+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:16 vm07 bash[20771]: audit 2026-03-09T21:13:16.269280+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:16 vm07 bash[20771]: audit 2026-03-09T21:13:16.269280+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:16 vm07 bash[20771]: cephadm 2026-03-09T21:13:16.269965+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:16 vm07 bash[20771]: cephadm 2026-03-09T21:13:16.269965+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:16 vm07 bash[28052]: audit 
2026-03-09T21:13:16.268537+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:16 vm07 bash[28052]: audit 2026-03-09T21:13:16.268537+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:16 vm07 bash[28052]: audit 2026-03-09T21:13:16.269280+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:16 vm07 bash[28052]: audit 2026-03-09T21:13:16.269280+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:16 vm07 bash[28052]: cephadm 2026-03-09T21:13:16.269965+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T21:13:16.791 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:16 vm07 bash[28052]: cephadm 2026-03-09T21:13:16.269965+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T21:13:17.182 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:13:17.182 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.182 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.182 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.182 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:13:17.183 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:16 vm10 bash[23387]: audit 2026-03-09T21:13:16.268537+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T21:13:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:16 vm10 bash[23387]: audit 2026-03-09T21:13:16.268537+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T21:13:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:16 vm10 bash[23387]: audit 2026-03-09T21:13:16.269280+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:16 vm10 bash[23387]: audit 2026-03-09T21:13:16.269280+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:16 vm10 bash[23387]: cephadm 2026-03-09T21:13:16.269965+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T21:13:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:16 vm10 bash[23387]: cephadm 2026-03-09T21:13:16.269965+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying 
daemon osd.3 on vm07 2026-03-09T21:13:17.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.616 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:13:17.616 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:17.616 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:13:17 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: cluster 2026-03-09T21:13:16.598295+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: cluster 2026-03-09T21:13:16.598295+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: audit 2026-03-09T21:13:17.460588+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: audit 2026-03-09T21:13:17.460588+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: audit 2026-03-09T21:13:17.467791+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: audit 2026-03-09T21:13:17.467791+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: audit 2026-03-09T21:13:17.475376+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:17 vm07 bash[20771]: audit 2026-03-09T21:13:17.475376+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: cluster 2026-03-09T21:13:16.598295+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: cluster 2026-03-09T21:13:16.598295+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: audit 2026-03-09T21:13:17.460588+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: audit 2026-03-09T21:13:17.460588+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: audit 2026-03-09T21:13:17.467791+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: audit 2026-03-09T21:13:17.467791+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: audit 2026-03-09T21:13:17.475376+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:17 vm07 bash[28052]: audit 2026-03-09T21:13:17.475376+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: cluster 2026-03-09T21:13:16.598295+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: cluster 2026-03-09T21:13:16.598295+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: audit 2026-03-09T21:13:17.460588+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: audit 2026-03-09T21:13:17.460588+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: audit 2026-03-09T21:13:17.467791+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: audit 2026-03-09T21:13:17.467791+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: audit 2026-03-09T21:13:17.475376+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:17 vm10 bash[23387]: audit 2026-03-09T21:13:17.475376+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:19.975 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:19 vm07 bash[20771]: cluster 2026-03-09T21:13:18.598590+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:19.975 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:19 vm07 bash[20771]: cluster 2026-03-09T21:13:18.598590+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:19.975 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:19 vm07 bash[28052]: cluster 2026-03-09T21:13:18.598590+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:19.975 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:19 vm07 bash[28052]: cluster 2026-03-09T21:13:18.598590+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T21:13:20.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:19 vm10 bash[23387]: cluster 2026-03-09T21:13:18.598590+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:20.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:19 vm10 bash[23387]: cluster 2026-03-09T21:13:18.598590+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:21 vm07 bash[20771]: cluster 2026-03-09T21:13:20.598893+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:21 vm07 bash[20771]: cluster 2026-03-09T21:13:20.598893+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:21 vm07 bash[20771]: audit 2026-03-09T21:13:21.100362+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:21 vm07 bash[20771]: audit 2026-03-09T21:13:21.100362+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:21 vm07 bash[20771]: audit 2026-03-09T21:13:21.100840+0000 mon.a (mon.0) 421 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:21 vm07 bash[20771]: audit 2026-03-09T21:13:21.100840+0000 mon.a (mon.0) 421 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:21 vm07 bash[28052]: cluster 2026-03-09T21:13:20.598893+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:21 vm07 bash[28052]: cluster 2026-03-09T21:13:20.598893+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:21 vm07 bash[28052]: audit 2026-03-09T21:13:21.100362+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:21 vm07 bash[28052]: audit 2026-03-09T21:13:21.100362+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:21 vm07 bash[28052]: audit 2026-03-09T21:13:21.100840+0000 mon.a (mon.0) 421 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:21 vm07 bash[28052]: audit 2026-03-09T21:13:21.100840+0000 mon.a (mon.0) 421 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:21 vm10 bash[23387]: cluster 2026-03-09T21:13:20.598893+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:21 vm10 bash[23387]: cluster 2026-03-09T21:13:20.598893+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:21 vm10 bash[23387]: audit 2026-03-09T21:13:21.100362+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:21 vm10 bash[23387]: audit 2026-03-09T21:13:21.100362+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:21 vm10 bash[23387]: audit 2026-03-09T21:13:21.100840+0000 mon.a (mon.0) 421 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:21 vm10 bash[23387]: audit 2026-03-09T21:13:21.100840+0000 mon.a (mon.0) 421 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.729976+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 
2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.729976+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: cluster 2026-03-09T21:13:21.734239+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: cluster 2026-03-09T21:13:21.734239+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.734453+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.734453+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.738785+0000 mon.c (mon.2) 11 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.738785+0000 mon.c (mon.2) 11 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.739223+0000 mon.a (mon.0) 425 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:22 vm07 bash[20771]: audit 2026-03-09T21:13:21.739223+0000 mon.a (mon.0) 425 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.729976+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.729976+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: cluster 2026-03-09T21:13:21.734239+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: cluster 2026-03-09T21:13:21.734239+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.734453+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.734453+0000 
mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:23.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.738785+0000 mon.c (mon.2) 11 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.738785+0000 mon.c (mon.2) 11 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.739223+0000 mon.a (mon.0) 425 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:22 vm07 bash[28052]: audit 2026-03-09T21:13:21.739223+0000 mon.a (mon.0) 425 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.729976+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.729976+0000 mon.a (mon.0) 422 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: cluster 2026-03-09T21:13:21.734239+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: cluster 2026-03-09T21:13:21.734239+0000 mon.a (mon.0) 423 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.734453+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.734453+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.738785+0000 mon.c (mon.2) 11 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.738785+0000 mon.c (mon.2) 11 : audit [INF] from='osd.3 v2:192.168.123.107:6813/1113345127' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.739223+0000 mon.a (mon.0) 425 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": 
["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:22 vm10 bash[23387]: audit 2026-03-09T21:13:21.739223+0000 mon.a (mon.0) 425 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: cluster 2026-03-09T21:13:22.599244+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: cluster 2026-03-09T21:13:22.599244+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:22.767044+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:22.767044+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: cluster 2026-03-09T21:13:22.771065+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: cluster 2026-03-09T21:13:22.771065+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 
vm07 bash[20771]: audit 2026-03-09T21:13:22.771774+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:22.771774+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:22.787671+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:22.787671+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.753127+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.753127+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.760766+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.760766+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:24.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.762942+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.762942+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.763573+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.763573+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.769361+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:23 vm07 bash[20771]: audit 2026-03-09T21:13:23.769361+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: cluster 2026-03-09T21:13:22.599244+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: cluster 2026-03-09T21:13:22.599244+0000 mgr.y 
(mgr.14150) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:22.767044+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:22.767044+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T21:13:24.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: cluster 2026-03-09T21:13:22.771065+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: cluster 2026-03-09T21:13:22.771065+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:22.771774+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:22.771774+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:22.787671+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 
2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:22.787671+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:23.753127+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:23.760766+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:23.762942+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:23.763573+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:13:24.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:23 vm07 bash[28052]: audit 2026-03-09T21:13:23.769361+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: cluster 2026-03-09T21:13:22.599244+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:22.767044+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: cluster 2026-03-09T21:13:22.771065+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:22.771774+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:22.787671+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:23.753127+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:23.760766+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:24.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:23.762942+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:24.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:23.763573+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:13:24.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:23 vm10 bash[23387]: audit 2026-03-09T21:13:23.769361+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:24.873 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 3 on host 'vm07'
2026-03-09T21:13:24.948 DEBUG:teuthology.orchestra.run.vm07:osd.3> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.3.service
2026-03-09T21:13:24.949 INFO:tasks.cephadm:Deploying osd.4 on vm10 with /dev/vde...
2026-03-09T21:13:24.949 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vde
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:24 vm07 bash[20771]: cluster 2026-03-09T21:13:22.119806+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:24 vm07 bash[20771]: cluster 2026-03-09T21:13:22.119870+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:24 vm07 bash[20771]: audit 2026-03-09T21:13:23.779388+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:24 vm07 bash[20771]: audit 2026-03-09T21:13:23.800083+0000 mon.a (mon.0) 436 : audit [INF] from='osd.3 ' entity='osd.3'
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:24 vm07 bash[20771]: audit 2026-03-09T21:13:24.777605+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:24 vm07 bash[28052]: cluster 2026-03-09T21:13:22.119806+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:24 vm07 bash[28052]: cluster 2026-03-09T21:13:22.119870+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:24 vm07 bash[28052]: audit 2026-03-09T21:13:23.779388+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:24 vm07 bash[28052]: audit 2026-03-09T21:13:23.800083+0000 mon.a (mon.0) 436 : audit [INF] from='osd.3 ' entity='osd.3'
2026-03-09T21:13:25.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:24 vm07 bash[28052]: audit 2026-03-09T21:13:24.777605+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:24 vm10 bash[23387]: cluster 2026-03-09T21:13:22.119806+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:13:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:24 vm10 bash[23387]: cluster 2026-03-09T21:13:22.119870+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:13:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:24 vm10 bash[23387]: audit 2026-03-09T21:13:23.779388+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:24 vm10 bash[23387]: audit 2026-03-09T21:13:23.800083+0000 mon.a (mon.0) 436 : audit [INF] from='osd.3 ' entity='osd.3'
2026-03-09T21:13:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:24 vm10 bash[23387]: audit 2026-03-09T21:13:24.777605+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: cluster 2026-03-09T21:13:24.599584+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: cluster 2026-03-09T21:13:24.804192+0000 mon.a (mon.0) 438 : cluster [INF] osd.3 v2:192.168.123.107:6813/1113345127 boot
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: cluster 2026-03-09T21:13:24.804344+0000 mon.a (mon.0) 439 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: audit 2026-03-09T21:13:24.806621+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: audit 2026-03-09T21:13:24.857308+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: audit 2026-03-09T21:13:24.863059+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:25 vm07 bash[20771]: audit 2026-03-09T21:13:24.868577+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: cluster 2026-03-09T21:13:24.599584+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: cluster 2026-03-09T21:13:24.804192+0000 mon.a (mon.0) 438 : cluster [INF] osd.3 v2:192.168.123.107:6813/1113345127 boot
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: cluster 2026-03-09T21:13:24.804344+0000 mon.a (mon.0) 439 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: audit 2026-03-09T21:13:24.806621+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: audit 2026-03-09T21:13:24.857308+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: audit 2026-03-09T21:13:24.863059+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:25 vm07 bash[28052]: audit 2026-03-09T21:13:24.868577+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: cluster 2026-03-09T21:13:24.599584+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T21:13:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: cluster 2026-03-09T21:13:24.804192+0000 mon.a (mon.0) 438 : cluster [INF] osd.3 v2:192.168.123.107:6813/1113345127 boot
2026-03-09T21:13:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: cluster 2026-03-09T21:13:24.804344+0000 mon.a (mon.0) 439 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-09T21:13:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: audit 2026-03-09T21:13:24.806621+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:13:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: audit 2026-03-09T21:13:24.857308+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:13:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: audit 2026-03-09T21:13:24.863059+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:26.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:25 vm10 bash[23387]: audit 2026-03-09T21:13:24.868577+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:27.136 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:26 vm10 bash[23387]: cluster 2026-03-09T21:13:25.876139+0000 mon.a (mon.0) 444 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-09T21:13:27.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:26 vm07 bash[20771]: cluster 2026-03-09T21:13:25.876139+0000 mon.a (mon.0) 444 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-09T21:13:27.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:26 vm07 bash[28052]: cluster 2026-03-09T21:13:25.876139+0000 mon.a (mon.0) 444 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-09T21:13:28.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:27 vm10 bash[23387]: cluster 2026-03-09T21:13:26.599855+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:28.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:27 vm07 bash[20771]: cluster 2026-03-09T21:13:26.599855+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:28.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:27 vm07 bash[28052]: cluster 2026-03-09T21:13:26.599855+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:29.574 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:13:30.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:29 vm07 bash[20771]: cluster 2026-03-09T21:13:28.600131+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:30.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:29 vm07 bash[28052]: cluster 2026-03-09T21:13:28.600131+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:30.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:29 vm10 bash[23387]: cluster 2026-03-09T21:13:28.600131+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:30.553 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:13:30.576 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm10:/dev/vde
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: cephadm 2026-03-09T21:13:30.594034+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: audit 2026-03-09T21:13:30.601068+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: cluster 2026-03-09T21:13:30.601564+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: audit 2026-03-09T21:13:30.607899+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: audit 2026-03-09T21:13:30.609484+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: audit 2026-03-09T21:13:30.610351+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: audit 2026-03-09T21:13:30.610839+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:31 vm07 bash[20771]: audit 2026-03-09T21:13:30.616460+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: cephadm 2026-03-09T21:13:30.594034+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: audit 2026-03-09T21:13:30.601068+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: cluster 2026-03-09T21:13:30.601564+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: audit 2026-03-09T21:13:30.607899+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: audit 2026-03-09T21:13:30.609484+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: audit 2026-03-09T21:13:30.610351+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: audit 2026-03-09T21:13:30.610839+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:13:31.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:31 vm07 bash[28052]: audit 2026-03-09T21:13:30.616460+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: cephadm 2026-03-09T21:13:30.594034+0000 mgr.y (mgr.14150) 141 : cephadm [INF] Detected new or changed devices on vm07
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.601068+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: cluster 2026-03-09T21:13:30.601564+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.607899+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.609484+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.610351+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.610351+0000 mon.a (mon.0) 448 : audit
[DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.610839+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.610839+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:31.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.616460+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:31.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:31 vm10 bash[23387]: audit 2026-03-09T21:13:30.616460+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:33.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:33 vm10 bash[23387]: cluster 2026-03-09T21:13:32.602000+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:33.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:33 vm10 bash[23387]: cluster 2026-03-09T21:13:32.602000+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:33 vm07 bash[28052]: cluster 2026-03-09T21:13:32.602000+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:34.116 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:33 vm07 bash[28052]: cluster 2026-03-09T21:13:32.602000+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:33 vm07 bash[20771]: cluster 2026-03-09T21:13:32.602000+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:33 vm07 bash[20771]: cluster 2026-03-09T21:13:32.602000+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:35.229 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:13:35.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:35 vm10 bash[23387]: cluster 2026-03-09T21:13:34.602252+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:35.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:35 vm10 bash[23387]: cluster 2026-03-09T21:13:34.602252+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:35 vm07 bash[28052]: cluster 2026-03-09T21:13:34.602252+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:36.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:35 vm07 bash[28052]: cluster 2026-03-09T21:13:34.602252+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:36.116 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:35 vm07 bash[20771]: cluster 2026-03-09T21:13:34.602252+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:36.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:35 vm07 bash[20771]: cluster 2026-03-09T21:13:34.602252+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.693157+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24196 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:13:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.693157+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24196 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:13:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.695261+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:13:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.695261+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:13:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.696885+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.696885+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:36.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.697335+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:36.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:36 vm10 bash[23387]: audit 2026-03-09T21:13:35.697335+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.693157+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24196 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.693157+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24196 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.695261+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 
2026-03-09T21:13:35.695261+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.696885+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.696885+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.697335+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:36 vm07 bash[28052]: audit 2026-03-09T21:13:35.697335+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.693157+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24196 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.693157+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24196 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.695261+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.695261+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.696885+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.696885+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.697335+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:37.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:36 vm07 bash[20771]: audit 2026-03-09T21:13:35.697335+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:37.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:37 vm10 bash[23387]: cluster 2026-03-09T21:13:36.602601+0000 mgr.y (mgr.14150) 146 : cluster [DBG] 
pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:37.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:37 vm10 bash[23387]: cluster 2026-03-09T21:13:36.602601+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:38.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:37 vm07 bash[20771]: cluster 2026-03-09T21:13:36.602601+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:38.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:37 vm07 bash[20771]: cluster 2026-03-09T21:13:36.602601+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:38.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:37 vm07 bash[28052]: cluster 2026-03-09T21:13:36.602601+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:38.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:37 vm07 bash[28052]: cluster 2026-03-09T21:13:36.602601+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:39 vm10 bash[23387]: cluster 2026-03-09T21:13:38.602875+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:39 vm10 bash[23387]: cluster 2026-03-09T21:13:38.602875+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:40.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:39 vm07 
bash[28052]: cluster 2026-03-09T21:13:38.602875+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:40.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:39 vm07 bash[28052]: cluster 2026-03-09T21:13:38.602875+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:40.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:39 vm07 bash[20771]: cluster 2026-03-09T21:13:38.602875+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:40.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:39 vm07 bash[20771]: cluster 2026-03-09T21:13:38.602875+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: cluster 2026-03-09T21:13:40.603141+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: cluster 2026-03-09T21:13:40.603141+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.173661+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/878094589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.173661+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 
192.168.123.110:0/878094589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.174133+0000 mon.a (mon.0) 454 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.174133+0000 mon.a (mon.0) 454 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.178376+0000 mon.a (mon.0) 455 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]': finished 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.178376+0000 mon.a (mon.0) 455 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]': finished 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: cluster 2026-03-09T21:13:41.184796+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: cluster 2026-03-09T21:13:41.184796+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.184962+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:41.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:41 vm10 bash[23387]: audit 2026-03-09T21:13:41.184962+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: cluster 2026-03-09T21:13:40.603141+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: cluster 2026-03-09T21:13:40.603141+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.173661+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 
192.168.123.110:0/878094589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.173661+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/878094589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.174133+0000 mon.a (mon.0) 454 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.174133+0000 mon.a (mon.0) 454 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.178376+0000 mon.a (mon.0) 455 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]': finished 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.178376+0000 mon.a (mon.0) 455 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]': finished 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: cluster 2026-03-09T21:13:41.184796+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: cluster 2026-03-09T21:13:41.184796+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.184962+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:41 vm07 bash[28052]: audit 2026-03-09T21:13:41.184962+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: cluster 2026-03-09T21:13:40.603141+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: cluster 2026-03-09T21:13:40.603141+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.173661+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 
192.168.123.110:0/878094589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.173661+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/878094589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.174133+0000 mon.a (mon.0) 454 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.174133+0000 mon.a (mon.0) 454 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.178376+0000 mon.a (mon.0) 455 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]': finished 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.178376+0000 mon.a (mon.0) 455 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512"}]': finished 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: cluster 2026-03-09T21:13:41.184796+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: cluster 2026-03-09T21:13:41.184796+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.184962+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:42.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:41 vm07 bash[20771]: audit 2026-03-09T21:13:41.184962+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:42 vm10 bash[23387]: audit 2026-03-09T21:13:41.871290+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/1634630529' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:42 vm10 bash[23387]: audit 2026-03-09T21:13:41.871290+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/1634630529' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:43.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:42 vm07 bash[28052]: audit 2026-03-09T21:13:41.871290+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 
192.168.123.110:0/1634630529' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:43.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:42 vm07 bash[28052]: audit 2026-03-09T21:13:41.871290+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/1634630529' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:43.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:42 vm07 bash[20771]: audit 2026-03-09T21:13:41.871290+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/1634630529' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:43.117 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:42 vm07 bash[20771]: audit 2026-03-09T21:13:41.871290+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/1634630529' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:13:43.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:43 vm10 bash[23387]: cluster 2026-03-09T21:13:42.604416+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:43.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:43 vm10 bash[23387]: cluster 2026-03-09T21:13:42.604416+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:43 vm07 bash[28052]: cluster 2026-03-09T21:13:42.604416+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:43 vm07 bash[28052]: cluster 2026-03-09T21:13:42.604416+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 
2026-03-09T21:13:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:43 vm07 bash[20771]: cluster 2026-03-09T21:13:42.604416+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:43 vm07 bash[20771]: cluster 2026-03-09T21:13:42.604416+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:45 vm10 bash[23387]: cluster 2026-03-09T21:13:44.604727+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:45 vm10 bash[23387]: cluster 2026-03-09T21:13:44.604727+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:46.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:45 vm07 bash[28052]: cluster 2026-03-09T21:13:44.604727+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:46.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:45 vm07 bash[28052]: cluster 2026-03-09T21:13:44.604727+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:46.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:45 vm07 bash[20771]: cluster 2026-03-09T21:13:44.604727+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:46.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:45 vm07 bash[20771]: cluster 2026-03-09T21:13:44.604727+0000 mgr.y (mgr.14150) 150 : cluster [DBG] 
pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:47 vm10 bash[23387]: cluster 2026-03-09T21:13:46.605012+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:47 vm10 bash[23387]: cluster 2026-03-09T21:13:46.605012+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:47 vm07 bash[28052]: cluster 2026-03-09T21:13:46.605012+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:47 vm07 bash[28052]: cluster 2026-03-09T21:13:46.605012+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:47 vm07 bash[20771]: cluster 2026-03-09T21:13:46.605012+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:47 vm07 bash[20771]: cluster 2026-03-09T21:13:46.605012+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:49 vm10 bash[23387]: cluster 2026-03-09T21:13:48.605265+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:49 vm10 
bash[23387]: cluster 2026-03-09T21:13:48.605265+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:49 vm07 bash[28052]: cluster 2026-03-09T21:13:48.605265+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:49 vm07 bash[28052]: cluster 2026-03-09T21:13:48.605265+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:49 vm07 bash[20771]: cluster 2026-03-09T21:13:48.605265+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:49 vm07 bash[20771]: cluster 2026-03-09T21:13:48.605265+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:50 vm10 bash[23387]: audit 2026-03-09T21:13:50.128066+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T21:13:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:50 vm10 bash[23387]: audit 2026-03-09T21:13:50.128066+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T21:13:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:50 vm10 bash[23387]: audit 2026-03-09T21:13:50.128937+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:50 vm10 bash[23387]: audit 2026-03-09T21:13:50.128937+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:50 vm10 bash[23387]: cephadm 2026-03-09T21:13:50.129530+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm10 2026-03-09T21:13:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:50 vm10 bash[23387]: cephadm 2026-03-09T21:13:50.129530+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm10 2026-03-09T21:13:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:50 vm07 bash[28052]: audit 2026-03-09T21:13:50.128066+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T21:13:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:50 vm07 bash[28052]: audit 2026-03-09T21:13:50.128066+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T21:13:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:50 vm07 bash[28052]: audit 2026-03-09T21:13:50.128937+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:50 vm07 bash[28052]: audit 2026-03-09T21:13:50.128937+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:51.116 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:50 vm07 bash[28052]: cephadm 2026-03-09T21:13:50.129530+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm10 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:50 vm07 bash[28052]: cephadm 2026-03-09T21:13:50.129530+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm10 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:50 vm07 bash[20771]: audit 2026-03-09T21:13:50.128066+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:50 vm07 bash[20771]: audit 2026-03-09T21:13:50.128066+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:50 vm07 bash[20771]: audit 2026-03-09T21:13:50.128937+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:50 vm07 bash[20771]: audit 2026-03-09T21:13:50.128937+0000 mon.a (mon.0) 459 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:50 vm07 bash[20771]: cephadm 2026-03-09T21:13:50.129530+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm10 2026-03-09T21:13:51.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:50 vm07 bash[20771]: cephadm 2026-03-09T21:13:50.129530+0000 mgr.y (mgr.14150) 153 : cephadm [INF] Deploying daemon osd.4 on vm10 2026-03-09T21:13:51.326 
INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:13:51 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:51.326 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:51.683 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:13:51.684 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:13:51 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: cluster 2026-03-09T21:13:50.605506+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: cluster 2026-03-09T21:13:50.605506+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: audit 2026-03-09T21:13:51.432090+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: audit 2026-03-09T21:13:51.432090+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: audit 2026-03-09T21:13:51.439444+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: audit 2026-03-09T21:13:51.439444+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: audit 2026-03-09T21:13:51.445883+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:51.946 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:51 vm10 bash[23387]: audit 2026-03-09T21:13:51.445883+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' 
entity='mgr.y' 2026-03-09T21:13:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: cluster 2026-03-09T21:13:50.605506+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: cluster 2026-03-09T21:13:50.605506+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: audit 2026-03-09T21:13:51.432090+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: audit 2026-03-09T21:13:51.432090+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: audit 2026-03-09T21:13:51.439444+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: audit 2026-03-09T21:13:51.439444+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: audit 2026-03-09T21:13:51.445883+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:51 vm07 bash[20771]: audit 2026-03-09T21:13:51.445883+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: cluster 2026-03-09T21:13:50.605506+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: cluster 2026-03-09T21:13:50.605506+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: audit 2026-03-09T21:13:51.432090+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: audit 2026-03-09T21:13:51.432090+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: audit 2026-03-09T21:13:51.439444+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: audit 2026-03-09T21:13:51.439444+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: audit 2026-03-09T21:13:51.445883+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:52.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:51 vm07 bash[28052]: audit 2026-03-09T21:13:51.445883+0000 mon.a (mon.0) 462 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:53 vm07 bash[20771]: cluster 2026-03-09T21:13:52.605820+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:53 vm07 bash[20771]: cluster 2026-03-09T21:13:52.605820+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:53 vm07 bash[28052]: cluster 2026-03-09T21:13:52.605820+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:53 vm07 bash[28052]: cluster 2026-03-09T21:13:52.605820+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:54.149 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:53 vm10 bash[23387]: cluster 2026-03-09T21:13:52.605820+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:54.149 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:53 vm10 bash[23387]: cluster 2026-03-09T21:13:52.605820+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:55 vm07 bash[20771]: cluster 2026-03-09T21:13:54.606190+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:55 vm07 bash[20771]: cluster 
2026-03-09T21:13:54.606190+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:55 vm07 bash[20771]: audit 2026-03-09T21:13:55.114206+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:55 vm07 bash[20771]: audit 2026-03-09T21:13:55.114206+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:55 vm07 bash[20771]: audit 2026-03-09T21:13:55.114611+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:55 vm07 bash[20771]: audit 2026-03-09T21:13:55.114611+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:55 vm07 bash[28052]: cluster 2026-03-09T21:13:54.606190+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:55 vm07 bash[28052]: cluster 2026-03-09T21:13:54.606190+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:55 vm07 bash[28052]: audit 
2026-03-09T21:13:55.114206+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:55 vm07 bash[28052]: audit 2026-03-09T21:13:55.114206+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:55 vm07 bash[28052]: audit 2026-03-09T21:13:55.114611+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:55 vm07 bash[28052]: audit 2026-03-09T21:13:55.114611+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:55 vm10 bash[23387]: cluster 2026-03-09T21:13:54.606190+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:55 vm10 bash[23387]: cluster 2026-03-09T21:13:54.606190+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:55 vm10 bash[23387]: audit 2026-03-09T21:13:55.114206+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 
09 21:13:55 vm10 bash[23387]: audit 2026-03-09T21:13:55.114206+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:55 vm10 bash[23387]: audit 2026-03-09T21:13:55.114611+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:55 vm10 bash[23387]: audit 2026-03-09T21:13:55.114611+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.707729+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.707729+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.711991+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.711991+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush 
create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: cluster 2026-03-09T21:13:55.712581+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: cluster 2026-03-09T21:13:55.712581+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.713592+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.713592+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.713726+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:55.713726+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:56.711681+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': 
finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: audit 2026-03-09T21:13:56.711681+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: cluster 2026-03-09T21:13:56.717886+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:56 vm07 bash[20771]: cluster 2026-03-09T21:13:56.717886+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.707729+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.707729+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.711991+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.711991+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: 
dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: cluster 2026-03-09T21:13:55.712581+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: cluster 2026-03-09T21:13:55.712581+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.713592+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.713592+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.713726+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:55.713726+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:56.711681+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:13:56 vm07 bash[28052]: audit 2026-03-09T21:13:56.711681+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: cluster 2026-03-09T21:13:56.717886+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T21:13:57.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:56 vm07 bash[28052]: cluster 2026-03-09T21:13:56.717886+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.707729+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.707729+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.711991+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.711991+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.110:6800/4164782911' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 
21:13:56 vm10 bash[23387]: cluster 2026-03-09T21:13:55.712581+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: cluster 2026-03-09T21:13:55.712581+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.713592+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.713592+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.713726+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:55.713726+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T21:13:57.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:56.711681+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T21:13:57.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: audit 2026-03-09T21:13:56.711681+0000 mon.a (mon.0) 468 
: audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T21:13:57.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: cluster 2026-03-09T21:13:56.717886+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T21:13:57.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:56 vm10 bash[23387]: cluster 2026-03-09T21:13:56.717886+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: cluster 2026-03-09T21:13:56.606523+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: cluster 2026-03-09T21:13:56.606523+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:56.718092+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:56.718092+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:56.725882+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 
09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:56.725882+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:57.679137+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:57.679137+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:57.685858+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:57.685858+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: cluster 2026-03-09T21:13:57.728463+0000 mon.a (mon.0) 474 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: cluster 2026-03-09T21:13:57.728463+0000 mon.a (mon.0) 474 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: cluster 2026-03-09T21:13:57.728506+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: cluster 2026-03-09T21:13:57.728506+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T21:13:57.793 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:57.728656+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:57.793 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:57 vm10 bash[23387]: audit 2026-03-09T21:13:57.728656+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: cluster 2026-03-09T21:13:56.606523+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: cluster 2026-03-09T21:13:56.606523+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:56.718092+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:56.718092+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:56.725882+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 
2026-03-09T21:13:56.725882+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:57.679137+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:57.679137+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:57.685858+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:57.685858+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: cluster 2026-03-09T21:13:57.728463+0000 mon.a (mon.0) 474 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: cluster 2026-03-09T21:13:57.728463+0000 mon.a (mon.0) 474 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: cluster 2026-03-09T21:13:57.728506+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: cluster 2026-03-09T21:13:57.728506+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:57.728656+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:57 vm07 bash[20771]: audit 2026-03-09T21:13:57.728656+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: cluster 2026-03-09T21:13:56.606523+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: cluster 2026-03-09T21:13:56.606523+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:56.718092+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:56.718092+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:56.725882+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:56.725882+0000 mon.a (mon.0) 
471 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:57.679137+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:57.679137+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:57.685858+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:57.685858+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: cluster 2026-03-09T21:13:57.728463+0000 mon.a (mon.0) 474 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: cluster 2026-03-09T21:13:57.728463+0000 mon.a (mon.0) 474 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: cluster 2026-03-09T21:13:57.728506+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: cluster 2026-03-09T21:13:57.728506+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 
2026-03-09T21:13:57.728656+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:57 vm07 bash[28052]: audit 2026-03-09T21:13:57.728656+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:13:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: cluster 2026-03-09T21:13:56.136814+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:13:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: cluster 2026-03-09T21:13:56.136814+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: cluster 2026-03-09T21:13:56.136894+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: cluster 2026-03-09T21:13:56.136894+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: audit 2026-03-09T21:13:58.155518+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: audit 2026-03-09T21:13:58.155518+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: audit 2026-03-09T21:13:58.156158+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: audit 2026-03-09T21:13:58.156158+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: audit 2026-03-09T21:13:58.162749+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: audit 2026-03-09T21:13:58.162749+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: cluster 2026-03-09T21:13:58.728236+0000 mon.a (mon.0) 480 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T21:13:58.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:13:58 vm10 bash[23387]: cluster 2026-03-09T21:13:58.728236+0000 mon.a (mon.0) 480 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T21:13:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: cluster 2026-03-09T21:13:56.136814+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:13:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: cluster 2026-03-09T21:13:56.136814+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:13:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: cluster 2026-03-09T21:13:56.136894+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:13:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: cluster 2026-03-09T21:13:56.136894+0000 osd.4 (osd.4) 2 
: cluster [DBG] purged_snaps scrub ok 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: audit 2026-03-09T21:13:58.155518+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: audit 2026-03-09T21:13:58.155518+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: audit 2026-03-09T21:13:58.156158+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: audit 2026-03-09T21:13:58.156158+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: audit 2026-03-09T21:13:58.162749+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: audit 2026-03-09T21:13:58.162749+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: cluster 2026-03-09T21:13:58.728236+0000 mon.a (mon.0) 480 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:13:58 vm07 bash[20771]: cluster 2026-03-09T21:13:58.728236+0000 mon.a 
(mon.0) 480 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: cluster 2026-03-09T21:13:56.136814+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: cluster 2026-03-09T21:13:56.136814+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: cluster 2026-03-09T21:13:56.136894+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: cluster 2026-03-09T21:13:56.136894+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: audit 2026-03-09T21:13:58.155518+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: audit 2026-03-09T21:13:58.155518+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: audit 2026-03-09T21:13:58.156158+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: audit 2026-03-09T21:13:58.156158+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: audit 2026-03-09T21:13:58.162749+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: audit 2026-03-09T21:13:58.162749+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: cluster 2026-03-09T21:13:58.728236+0000 mon.a (mon.0) 480 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T21:13:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:13:58 vm07 bash[28052]: cluster 2026-03-09T21:13:58.728236+0000 mon.a (mon.0) 480 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T21:13:59.290 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 4 on host 'vm10' 2026-03-09T21:13:59.422 DEBUG:teuthology.orchestra.run.vm10:osd.4> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.4.service 2026-03-09T21:13:59.423 INFO:tasks.cephadm:Deploying osd.5 on vm10 with /dev/vdd... 
2026-03-09T21:13:59.423 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vdd 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: cluster 2026-03-09T21:13:58.606975+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: cluster 2026-03-09T21:13:58.606975+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: audit 2026-03-09T21:13:59.267520+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: audit 2026-03-09T21:13:59.267520+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: audit 2026-03-09T21:13:59.275069+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: audit 2026-03-09T21:13:59.275069+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: audit 
2026-03-09T21:13:59.284368+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: audit 2026-03-09T21:13:59.284368+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: cluster 2026-03-09T21:13:59.731027+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:00 vm07 bash[20771]: cluster 2026-03-09T21:13:59.731027+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: cluster 2026-03-09T21:13:58.606975+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: cluster 2026-03-09T21:13:58.606975+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: audit 2026-03-09T21:13:59.267520+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: audit 2026-03-09T21:13:59.267520+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: audit 2026-03-09T21:13:59.275069+0000 mon.a 
(mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: audit 2026-03-09T21:13:59.275069+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: audit 2026-03-09T21:13:59.284368+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: audit 2026-03-09T21:13:59.284368+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: cluster 2026-03-09T21:13:59.731027+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T21:14:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:00 vm07 bash[28052]: cluster 2026-03-09T21:13:59.731027+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: cluster 2026-03-09T21:13:58.606975+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: cluster 2026-03-09T21:13:58.606975+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: audit 2026-03-09T21:13:59.267520+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: audit 2026-03-09T21:13:59.267520+0000 mon.a (mon.0) 481 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: audit 2026-03-09T21:13:59.275069+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: audit 2026-03-09T21:13:59.275069+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: audit 2026-03-09T21:13:59.284368+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: audit 2026-03-09T21:13:59.284368+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: cluster 2026-03-09T21:13:59.731027+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T21:14:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:00 vm10 bash[23387]: cluster 2026-03-09T21:13:59.731027+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T21:14:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:02 vm07 bash[20771]: cluster 2026-03-09T21:14:00.607259+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:02.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:02 vm07 bash[20771]: cluster 
2026-03-09T21:14:00.607259+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:02.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:02 vm07 bash[28052]: cluster 2026-03-09T21:14:00.607259+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:02 vm10 bash[23387]: cluster 2026-03-09T21:14:00.607259+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:04.111 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:14:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:04 vm10 bash[23387]: cluster 2026-03-09T21:14:02.607664+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 76 KiB/s, 0 objects/s recovering
2026-03-09T21:14:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:04 vm07 bash[20771]: cluster 2026-03-09T21:14:02.607664+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 76 KiB/s, 0 objects/s recovering
2026-03-09T21:14:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:04 vm07 bash[28052]: cluster 2026-03-09T21:14:02.607664+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail; 76 KiB/s, 0 objects/s recovering
2026-03-09T21:14:05.941 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:14:05.958 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm10:/dev/vdd
2026-03-09T21:14:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: cluster 2026-03-09T21:14:04.607940+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering
2026-03-09T21:14:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: cephadm 2026-03-09T21:14:05.190290+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm10
2026-03-09T21:14:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: audit 2026-03-09T21:14:05.197172+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: audit 2026-03-09T21:14:05.202083+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: audit 2026-03-09T21:14:05.202941+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:14:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: cephadm 2026-03-09T21:14:05.203348+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm10 to 455.7M
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: cephadm 2026-03-09T21:14:05.203783+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: audit 2026-03-09T21:14:05.204168+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: audit 2026-03-09T21:14:05.204609+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:06 vm07 bash[20771]: audit 2026-03-09T21:14:05.209268+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: cluster 2026-03-09T21:14:04.607940+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: cephadm 2026-03-09T21:14:05.190290+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm10
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: audit 2026-03-09T21:14:05.197172+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: audit 2026-03-09T21:14:05.202083+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: audit 2026-03-09T21:14:05.202941+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: cephadm 2026-03-09T21:14:05.203348+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm10 to 455.7M
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: cephadm 2026-03-09T21:14:05.203783+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: audit 2026-03-09T21:14:05.204168+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: audit 2026-03-09T21:14:05.204609+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:14:06.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:06 vm07 bash[28052]: audit 2026-03-09T21:14:05.209268+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: cluster 2026-03-09T21:14:04.607940+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: cephadm 2026-03-09T21:14:05.190290+0000 mgr.y (mgr.14150) 162 : cephadm [INF] Detected new or changed devices on vm10
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: audit 2026-03-09T21:14:05.197172+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: audit 2026-03-09T21:14:05.202083+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: audit 2026-03-09T21:14:05.202941+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: cephadm 2026-03-09T21:14:05.203348+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Adjusting osd_memory_target on vm10 to 455.7M
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: cephadm 2026-03-09T21:14:05.203783+0000 mgr.y (mgr.14150) 164 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: audit 2026-03-09T21:14:05.204168+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: audit 2026-03-09T21:14:05.204609+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:14:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:06 vm10 bash[23387]: audit 2026-03-09T21:14:05.209268+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:08 vm07 bash[20771]: cluster 2026-03-09T21:14:06.608250+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-09T21:14:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:08 vm07 bash[28052]: cluster 2026-03-09T21:14:06.608250+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-09T21:14:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:08 vm10 bash[23387]: cluster 2026-03-09T21:14:06.608250+0000 mgr.y (mgr.14150) 165 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-09T21:14:10.590 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:14:10.611 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:10 vm10 bash[23387]: cluster 2026-03-09T21:14:08.608554+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-09T21:14:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:10 vm07 bash[28052]: cluster 2026-03-09T21:14:08.608554+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-09T21:14:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:10 vm07 bash[20771]: cluster 2026-03-09T21:14:08.608554+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-09T21:14:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:11 vm07 bash[28052]: audit 2026-03-09T21:14:10.893522+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:14:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:11 vm07 bash[28052]: audit 2026-03-09T21:14:10.895177+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:14:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:11 vm07 bash[28052]: audit 2026-03-09T21:14:10.895652+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:11 vm07 bash[20771]: audit 2026-03-09T21:14:10.893522+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:14:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:11 vm07 bash[20771]: audit 2026-03-09T21:14:10.895177+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:14:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:11 vm07 bash[20771]: audit 2026-03-09T21:14:10.895652+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:11 vm10 bash[23387]: audit 2026-03-09T21:14:10.893522+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T21:14:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:11 vm10 bash[23387]: audit 2026-03-09T21:14:10.895177+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T21:14:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:11 vm10 bash[23387]: audit 2026-03-09T21:14:10.895652+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:12 vm07 bash[28052]: cluster 2026-03-09T21:14:10.608886+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 41 KiB/s, 0 objects/s recovering
2026-03-09T21:14:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:12 vm07 bash[28052]: audit 2026-03-09T21:14:10.891909+0000 mgr.y (mgr.14150) 168 : audit [DBG] from='client.24223 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:14:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:12 vm07 bash[20771]: cluster 2026-03-09T21:14:10.608886+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 41 KiB/s, 0 objects/s recovering
2026-03-09T21:14:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:12 vm07 bash[20771]: audit 2026-03-09T21:14:10.891909+0000 mgr.y (mgr.14150) 168 : audit [DBG] from='client.24223 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:14:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:12 vm10 bash[23387]: cluster 2026-03-09T21:14:10.608886+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 41 KiB/s, 0 objects/s recovering
2026-03-09T21:14:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:12 vm10 bash[23387]: audit 2026-03-09T21:14:10.891909+0000 mgr.y (mgr.14150) 168 : audit [DBG] from='client.24223 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:14:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:14 vm07 bash[28052]: cluster 2026-03-09T21:14:12.609174+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-09T21:14:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:14 vm07 bash[20771]: cluster 2026-03-09T21:14:12.609174+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-09T21:14:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:14 vm10 bash[23387]: cluster 2026-03-09T21:14:12.609174+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-09T21:14:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:16 vm07 bash[28052]: cluster 2026-03-09T21:14:14.609515+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:16 vm07 bash[20771]: cluster 2026-03-09T21:14:14.609515+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:16 vm10 bash[23387]: cluster 2026-03-09T21:14:14.609515+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:17 vm07 bash[28052]: audit 2026-03-09T21:14:16.664834+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/285338455' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch
2026-03-09T21:14:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:17 vm07 bash[28052]: audit 2026-03-09T21:14:16.665504+0000 mon.a (mon.0) 494 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch
2026-03-09T21:14:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:17 vm07 bash[28052]: audit 2026-03-09T21:14:16.670049+0000 mon.a (mon.0) 495 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]': finished
2026-03-09T21:14:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:17 vm07 bash[28052]: cluster 2026-03-09T21:14:16.675145+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-09T21:14:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:17 vm07 bash[28052]: audit 2026-03-09T21:14:16.675525+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:17.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:17 vm07 bash[20771]: audit 2026-03-09T21:14:16.664834+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/285338455' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch
2026-03-09T21:14:17.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:17 vm07 bash[20771]: audit 2026-03-09T21:14:16.665504+0000 mon.a (mon.0) 494 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch
2026-03-09T21:14:17.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:17 vm07 bash[20771]: audit 2026-03-09T21:14:16.670049+0000 mon.a (mon.0) 495 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]': finished
2026-03-09T21:14:17.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:17 vm07 bash[20771]: cluster 2026-03-09T21:14:16.675145+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-09T21:14:17.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:17 vm07 bash[20771]: audit 2026-03-09T21:14:16.675525+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.664834+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/285338455' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.664834+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/285338455' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.665504+0000 mon.a (mon.0) 494 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.665504+0000 mon.a (mon.0) 494 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]: dispatch 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.670049+0000 mon.a (mon.0) 495 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]': finished 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.670049+0000 mon.a (mon.0) 495 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "94d2c197-ad39-4db0-9389-4183a78f1d0a"}]': finished 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: cluster 2026-03-09T21:14:16.675145+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: cluster 2026-03-09T21:14:16.675145+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.675525+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T21:14:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:17 vm10 bash[23387]: audit 2026-03-09T21:14:16.675525+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T21:14:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:18 vm07 bash[20771]: cluster 2026-03-09T21:14:16.609836+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:18 vm07 bash[20771]: cluster 2026-03-09T21:14:16.609836+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:18.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:18 vm07 bash[20771]: audit 2026-03-09T21:14:17.376169+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.110:0/3315752682' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:18.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:18 vm07 bash[20771]: audit 2026-03-09T21:14:17.376169+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.110:0/3315752682' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:18 vm07 bash[28052]: cluster 2026-03-09T21:14:16.609836+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:18 vm07 bash[28052]: cluster 2026-03-09T21:14:16.609836+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:18 vm07 bash[28052]: audit 2026-03-09T21:14:17.376169+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.110:0/3315752682' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:18 vm07 bash[28052]: audit 2026-03-09T21:14:17.376169+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.110:0/3315752682' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:18 vm10 bash[23387]: cluster 2026-03-09T21:14:16.609836+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:18 vm10 bash[23387]: cluster 2026-03-09T21:14:16.609836+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:18 vm10 bash[23387]: audit 2026-03-09T21:14:17.376169+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.110:0/3315752682' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:18 vm10 bash[23387]: audit 2026-03-09T21:14:17.376169+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.110:0/3315752682' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:20.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:20 vm07 bash[28052]: cluster 2026-03-09T21:14:18.610264+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:20.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:20 vm07 bash[28052]: cluster 2026-03-09T21:14:18.610264+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:20.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:20 vm07 bash[20771]: cluster 2026-03-09T21:14:18.610264+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:20.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:20 vm07 bash[20771]: cluster 2026-03-09T21:14:18.610264+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:20 vm10 bash[23387]: cluster 2026-03-09T21:14:18.610264+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:20 vm10 bash[23387]: cluster 2026-03-09T21:14:18.610264+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:22 vm07 bash[28052]: cluster 2026-03-09T21:14:20.610650+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:22.615 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:22 vm07 bash[28052]: cluster 2026-03-09T21:14:20.610650+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:22 vm07 bash[20771]: cluster 2026-03-09T21:14:20.610650+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:22 vm07 bash[20771]: cluster 2026-03-09T21:14:20.610650+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:22 vm10 bash[23387]: cluster 2026-03-09T21:14:20.610650+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:22 vm10 bash[23387]: cluster 2026-03-09T21:14:20.610650+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:24 vm10 bash[23387]: cluster 2026-03-09T21:14:22.611030+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:24 vm10 bash[23387]: cluster 2026-03-09T21:14:22.611030+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:24.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:24 vm07 bash[20771]: cluster 2026-03-09T21:14:22.611030+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v147: 1 
pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:24.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:24 vm07 bash[20771]: cluster 2026-03-09T21:14:22.611030+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:24 vm07 bash[28052]: cluster 2026-03-09T21:14:22.611030+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:24 vm07 bash[28052]: cluster 2026-03-09T21:14:22.611030+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 bash[23387]: cluster 2026-03-09T21:14:24.611443+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 bash[23387]: cluster 2026-03-09T21:14:24.611443+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 bash[23387]: audit 2026-03-09T21:14:25.845363+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T21:14:26.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 bash[23387]: audit 2026-03-09T21:14:25.845363+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T21:14:26.435 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 bash[23387]: audit 2026-03-09T21:14:25.846051+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:26.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 bash[23387]: audit 2026-03-09T21:14:25.846051+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:26 vm07 bash[28052]: cluster 2026-03-09T21:14:24.611443+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:26 vm07 bash[28052]: cluster 2026-03-09T21:14:24.611443+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:26 vm07 bash[28052]: audit 2026-03-09T21:14:25.845363+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T21:14:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:26 vm07 bash[28052]: audit 2026-03-09T21:14:25.845363+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T21:14:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:26 vm07 bash[28052]: audit 2026-03-09T21:14:25.846051+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 21:14:26 vm07 bash[28052]: audit 2026-03-09T21:14:25.846051+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:26.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:26 vm07 bash[20771]: cluster 2026-03-09T21:14:24.611443+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:26 vm07 bash[20771]: cluster 2026-03-09T21:14:24.611443+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:26.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:26 vm07 bash[20771]: audit 2026-03-09T21:14:25.845363+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T21:14:26.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:26 vm07 bash[20771]: audit 2026-03-09T21:14:25.845363+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T21:14:26.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:26 vm07 bash[20771]: audit 2026-03-09T21:14:25.846051+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:26.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:26 vm07 bash[20771]: audit 2026-03-09T21:14:25.846051+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:27.096 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:26 vm10 systemd[1]: 
/etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:14:27.096 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:14:26 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:14:27.097 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:14:26 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:14:27.365 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:14:27 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:14:27.366 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:14:27.366 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:14:27 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: cephadm 2026-03-09T21:14:25.846553+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm10 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: cephadm 2026-03-09T21:14:25.846553+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm10 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: audit 2026-03-09T21:14:27.236585+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: audit 2026-03-09T21:14:27.236585+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: audit 2026-03-09T21:14:27.244385+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 
vm10 bash[23387]: audit 2026-03-09T21:14:27.244385+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: audit 2026-03-09T21:14:27.257148+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:27 vm10 bash[23387]: audit 2026-03-09T21:14:27.257148+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: cephadm 2026-03-09T21:14:25.846553+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm10 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: cephadm 2026-03-09T21:14:25.846553+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm10 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: audit 2026-03-09T21:14:27.236585+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: audit 2026-03-09T21:14:27.236585+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: audit 2026-03-09T21:14:27.244385+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: audit 2026-03-09T21:14:27.244385+0000 mon.a (mon.0) 501 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: audit 2026-03-09T21:14:27.257148+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:27 vm07 bash[28052]: audit 2026-03-09T21:14:27.257148+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: cephadm 2026-03-09T21:14:25.846553+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm10 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: cephadm 2026-03-09T21:14:25.846553+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm10 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: audit 2026-03-09T21:14:27.236585+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: audit 2026-03-09T21:14:27.236585+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: audit 2026-03-09T21:14:27.244385+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.869 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: audit 2026-03-09T21:14:27.244385+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.869 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: audit 2026-03-09T21:14:27.257148+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:27.870 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:27 vm07 bash[20771]: audit 2026-03-09T21:14:27.257148+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:28 vm10 bash[23387]: cluster 2026-03-09T21:14:26.611771+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:28 vm10 bash[23387]: cluster 2026-03-09T21:14:26.611771+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:28 vm07 bash[28052]: cluster 2026-03-09T21:14:26.611771+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:28 vm07 bash[28052]: cluster 2026-03-09T21:14:26.611771+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:28 vm07 bash[20771]: cluster 2026-03-09T21:14:26.611771+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T21:14:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:28 vm07 bash[20771]: cluster 2026-03-09T21:14:26.611771+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB 
avail
2026-03-09T21:14:30.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:30 vm10 bash[23387]: cluster 2026-03-09T21:14:28.612198+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:30 vm07 bash[28052]: cluster 2026-03-09T21:14:28.612198+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:30 vm07 bash[20771]: cluster 2026-03-09T21:14:28.612198+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:31.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:31 vm10 bash[23387]: audit 2026-03-09T21:14:31.032786+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.110:6804/1216077544' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T21:14:31.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:31 vm10 bash[23387]: audit 2026-03-09T21:14:31.033527+0000 mon.a (mon.0) 503 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T21:14:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:31 vm07 bash[28052]: audit 2026-03-09T21:14:31.032786+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.110:6804/1216077544' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T21:14:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:31 vm07 bash[28052]: audit 2026-03-09T21:14:31.033527+0000 mon.a (mon.0) 503 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T21:14:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:31 vm07 bash[20771]: audit 2026-03-09T21:14:31.032786+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.110:6804/1216077544' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T21:14:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:31 vm07 bash[20771]: audit 2026-03-09T21:14:31.033527+0000 mon.a (mon.0) 503 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T21:14:32.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: cluster 2026-03-09T21:14:30.612544+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:32.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: audit 2026-03-09T21:14:31.409792+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T21:14:32.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: audit 2026-03-09T21:14:31.413726+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.110:6804/1216077544' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:14:32.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: cluster 2026-03-09T21:14:31.416452+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-09T21:14:32.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: audit 2026-03-09T21:14:31.417641+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:32.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: audit 2026-03-09T21:14:31.417744+0000 mon.a (mon.0) 507 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:14:32.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: audit 2026-03-09T21:14:32.417955+0000 mon.a (mon.0) 508 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:14:32.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:32 vm10 bash[23387]: cluster 2026-03-09T21:14:32.425865+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: cluster 2026-03-09T21:14:30.612544+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: audit 2026-03-09T21:14:31.409792+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: audit 2026-03-09T21:14:31.413726+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.110:6804/1216077544' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: cluster 2026-03-09T21:14:31.416452+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: audit 2026-03-09T21:14:31.417641+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: audit 2026-03-09T21:14:31.417744+0000 mon.a (mon.0) 507 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: audit 2026-03-09T21:14:32.417955+0000 mon.a (mon.0) 508 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:14:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:32 vm07 bash[20771]: cluster 2026-03-09T21:14:32.425865+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: cluster 2026-03-09T21:14:30.612544+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: audit 2026-03-09T21:14:31.409792+0000 mon.a (mon.0) 504 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: audit 2026-03-09T21:14:31.413726+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.110:6804/1216077544' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: cluster 2026-03-09T21:14:31.416452+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: audit 2026-03-09T21:14:31.417641+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: audit 2026-03-09T21:14:31.417744+0000 mon.a (mon.0) 507 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: audit 2026-03-09T21:14:32.417955+0000 mon.a (mon.0) 508 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:14:32.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:32 vm07 bash[28052]: cluster 2026-03-09T21:14:32.425865+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-09T21:14:33.609 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:33 vm10 bash[23387]: audit 2026-03-09T21:14:32.426815+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:33.609 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:33 vm10 bash[23387]: audit 2026-03-09T21:14:33.426201+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:33.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:33 vm07 bash[20771]: audit 2026-03-09T21:14:32.426815+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:33.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:33 vm07 bash[20771]: audit 2026-03-09T21:14:33.426201+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:33.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:33 vm07 bash[28052]: audit 2026-03-09T21:14:32.426815+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:33.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:33 vm07 bash[28052]: audit 2026-03-09T21:14:33.426201+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:34.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: cluster 2026-03-09T21:14:32.010533+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:14:34.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: cluster 2026-03-09T21:14:32.010599+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:14:34.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: cluster 2026-03-09T21:14:32.612936+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:34.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: cluster 2026-03-09T21:14:33.486344+0000 mon.a (mon.0) 512 : cluster [INF] osd.5 v2:192.168.123.110:6804/1216077544 boot
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: cluster 2026-03-09T21:14:33.486400+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: audit 2026-03-09T21:14:33.493257+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: audit 2026-03-09T21:14:33.536852+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: audit 2026-03-09T21:14:33.543477+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: audit 2026-03-09T21:14:33.544683+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: audit 2026-03-09T21:14:33.545441+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: audit 2026-03-09T21:14:33.554263+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:34 vm10 bash[23387]: cluster 2026-03-09T21:14:33.973501+0000 mon.a (mon.0) 520 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-09T21:14:34.791 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 5 on host 'vm10'
2026-03-09T21:14:34.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: cluster 2026-03-09T21:14:32.010533+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: cluster 2026-03-09T21:14:32.010599+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: cluster 2026-03-09T21:14:32.612936+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: cluster 2026-03-09T21:14:33.486344+0000 mon.a (mon.0) 512 : cluster [INF] osd.5 v2:192.168.123.110:6804/1216077544 boot
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: cluster 2026-03-09T21:14:33.486400+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: audit 2026-03-09T21:14:33.493257+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: audit 2026-03-09T21:14:33.536852+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: audit 2026-03-09T21:14:33.543477+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: audit 2026-03-09T21:14:33.544683+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: audit 2026-03-09T21:14:33.545441+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: audit 2026-03-09T21:14:33.554263+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:34 vm07 bash[20771]: cluster 2026-03-09T21:14:33.973501+0000 mon.a (mon.0) 520 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: cluster 2026-03-09T21:14:32.010533+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: cluster 2026-03-09T21:14:32.010599+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: cluster 2026-03-09T21:14:32.612936+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: cluster 2026-03-09T21:14:33.486344+0000 mon.a (mon.0) 512 : cluster [INF] osd.5 v2:192.168.123.110:6804/1216077544 boot
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: cluster 2026-03-09T21:14:33.486400+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: audit 2026-03-09T21:14:33.493257+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: audit 2026-03-09T21:14:33.536852+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: audit 2026-03-09T21:14:33.543477+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: audit 2026-03-09T21:14:33.544683+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:14:34.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: audit 2026-03-09T21:14:33.545441+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:14:34.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: audit 2026-03-09T21:14:33.554263+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:34.867 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:34 vm07 bash[28052]: cluster 2026-03-09T21:14:33.973501+0000 mon.a (mon.0) 520 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-09T21:14:34.912 DEBUG:teuthology.orchestra.run.vm10:osd.5> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.5.service
2026-03-09T21:14:34.913 INFO:tasks.cephadm:Deploying osd.6 on vm10 with /dev/vdc...
2026-03-09T21:14:34.913 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vdc
2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:35 vm07 bash[20771]: audit 2026-03-09T21:14:34.755821+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:35 vm07 bash[20771]: audit 2026-03-09T21:14:34.779276+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:35 vm07 bash[20771]: audit 2026-03-09T21:14:34.786002+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:35 vm07 bash[20771]: cluster 2026-03-09T21:14:35.179060+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e38: 6 total, 6 up, 6
in 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:35 vm07 bash[20771]: cluster 2026-03-09T21:14:35.179060+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: audit 2026-03-09T21:14:34.755821+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: audit 2026-03-09T21:14:34.755821+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: audit 2026-03-09T21:14:34.779276+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: audit 2026-03-09T21:14:34.779276+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: audit 2026-03-09T21:14:34.786002+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: audit 2026-03-09T21:14:34.786002+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: cluster 2026-03-09T21:14:35.179060+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T21:14:36.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:35 vm07 bash[28052]: 
cluster 2026-03-09T21:14:35.179060+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: audit 2026-03-09T21:14:34.755821+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: audit 2026-03-09T21:14:34.755821+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: audit 2026-03-09T21:14:34.779276+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: audit 2026-03-09T21:14:34.779276+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: audit 2026-03-09T21:14:34.786002+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: audit 2026-03-09T21:14:34.786002+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: cluster 2026-03-09T21:14:35.179060+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-09T21:14:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:35 vm10 bash[23387]: cluster 2026-03-09T21:14:35.179060+0000 mon.a (mon.0) 524 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 
2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:36 vm07 bash[20771]: cluster 2026-03-09T21:14:34.613373+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v157: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:36 vm07 bash[20771]: cluster 2026-03-09T21:14:34.613373+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v157: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:36 vm07 bash[20771]: cluster 2026-03-09T21:14:35.511913+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:36 vm07 bash[20771]: cluster 2026-03-09T21:14:35.511913+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:36 vm07 bash[28052]: cluster 2026-03-09T21:14:34.613373+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v157: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:36 vm07 bash[28052]: cluster 2026-03-09T21:14:34.613373+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v157: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:36 vm07 bash[28052]: cluster 2026-03-09T21:14:35.511913+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T21:14:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:36 vm07 bash[28052]: cluster 2026-03-09T21:14:35.511913+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: Reduced data availability: 1 
pg peering (PG_AVAILABILITY) 2026-03-09T21:14:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:36 vm10 bash[23387]: cluster 2026-03-09T21:14:34.613373+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v157: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:36 vm10 bash[23387]: cluster 2026-03-09T21:14:34.613373+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v157: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:36 vm10 bash[23387]: cluster 2026-03-09T21:14:35.511913+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T21:14:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:36 vm10 bash[23387]: cluster 2026-03-09T21:14:35.511913+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T21:14:38.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:37 vm07 bash[20771]: cluster 2026-03-09T21:14:36.613802+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v159: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:38.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:37 vm07 bash[20771]: cluster 2026-03-09T21:14:36.613802+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v159: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:38.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:37 vm07 bash[28052]: cluster 2026-03-09T21:14:36.613802+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v159: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:38.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:37 vm07 bash[28052]: cluster 2026-03-09T21:14:36.613802+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap 
v159: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:37 vm10 bash[23387]: cluster 2026-03-09T21:14:36.613802+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v159: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:37 vm10 bash[23387]: cluster 2026-03-09T21:14:36.613802+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v159: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:39.591 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:14:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:39 vm10 bash[23387]: cluster 2026-03-09T21:14:38.614152+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v160: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:39 vm10 bash[23387]: cluster 2026-03-09T21:14:38.614152+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v160: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:39 vm07 bash[20771]: cluster 2026-03-09T21:14:38.614152+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v160: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:39 vm07 bash[20771]: cluster 2026-03-09T21:14:38.614152+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v160: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:39 vm07 bash[28052]: cluster 2026-03-09T21:14:38.614152+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v160: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 
120 GiB / 120 GiB avail 2026-03-09T21:14:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:39 vm07 bash[28052]: cluster 2026-03-09T21:14:38.614152+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v160: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:40.569 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:14:40.589 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm10:/dev/vdc 2026-03-09T21:14:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:40 vm10 bash[23387]: cluster 2026-03-09T21:14:40.791521+0000 mon.a (mon.0) 526 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T21:14:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:40 vm10 bash[23387]: cluster 2026-03-09T21:14:40.791521+0000 mon.a (mon.0) 526 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T21:14:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:40 vm10 bash[23387]: cluster 2026-03-09T21:14:40.791560+0000 mon.a (mon.0) 527 : cluster [INF] Cluster is now healthy 2026-03-09T21:14:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:40 vm10 bash[23387]: cluster 2026-03-09T21:14:40.791560+0000 mon.a (mon.0) 527 : cluster [INF] Cluster is now healthy 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:40 vm07 bash[20771]: cluster 2026-03-09T21:14:40.791521+0000 mon.a (mon.0) 526 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:40 vm07 bash[20771]: cluster 2026-03-09T21:14:40.791521+0000 mon.a (mon.0) 526 : cluster 
[INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:40 vm07 bash[20771]: cluster 2026-03-09T21:14:40.791560+0000 mon.a (mon.0) 527 : cluster [INF] Cluster is now healthy 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:40 vm07 bash[20771]: cluster 2026-03-09T21:14:40.791560+0000 mon.a (mon.0) 527 : cluster [INF] Cluster is now healthy 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:40 vm07 bash[28052]: cluster 2026-03-09T21:14:40.791521+0000 mon.a (mon.0) 526 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:40 vm07 bash[28052]: cluster 2026-03-09T21:14:40.791521+0000 mon.a (mon.0) 526 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:40 vm07 bash[28052]: cluster 2026-03-09T21:14:40.791560+0000 mon.a (mon.0) 527 : cluster [INF] Cluster is now healthy 2026-03-09T21:14:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:40 vm07 bash[28052]: cluster 2026-03-09T21:14:40.791560+0000 mon.a (mon.0) 527 : cluster [INF] Cluster is now healthy 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: cluster 2026-03-09T21:14:40.614477+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: cluster 2026-03-09T21:14:40.614477+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: 
audit 2026-03-09T21:14:41.643276+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.643276+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.650372+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.650372+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.651361+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.651361+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.651854+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.651854+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": 
"config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.653153+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.653153+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.653646+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:14:41.897 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.653646+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:14:41.898 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.659977+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:41.898 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:41 vm10 bash[23387]: audit 2026-03-09T21:14:41.659977+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: cluster 2026-03-09T21:14:40.614477+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:42.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: cluster 2026-03-09T21:14:40.614477+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.643276+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.643276+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.650372+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.650372+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.651361+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.651361+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.651854+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.651854+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.653153+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.653153+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.653646+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.653646+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.659977+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:41 vm07 bash[20771]: audit 2026-03-09T21:14:41.659977+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: cluster 2026-03-09T21:14:40.614477+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: cluster 2026-03-09T21:14:40.614477+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.643276+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.643276+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.650372+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.650372+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.651361+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.651361+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.651854+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.651854+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.653153+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.653153+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.653646+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.653646+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 
bash[28052]: audit 2026-03-09T21:14:41.659977+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:42.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:41 vm07 bash[28052]: audit 2026-03-09T21:14:41.659977+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:14:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:42 vm10 bash[23387]: cephadm 2026-03-09T21:14:41.635931+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:14:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:42 vm10 bash[23387]: cephadm 2026-03-09T21:14:41.635931+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:14:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:42 vm10 bash[23387]: cephadm 2026-03-09T21:14:41.652238+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm10 to 227.8M 2026-03-09T21:14:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:42 vm10 bash[23387]: cephadm 2026-03-09T21:14:41.652238+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm10 to 227.8M 2026-03-09T21:14:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:42 vm10 bash[23387]: cephadm 2026-03-09T21:14:41.652734+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T21:14:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:42 vm10 bash[23387]: cephadm 2026-03-09T21:14:41.652734+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:42 vm07 bash[20771]: cephadm 2026-03-09T21:14:41.635931+0000 mgr.y (mgr.14150) 185 : 
cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:42 vm07 bash[20771]: cephadm 2026-03-09T21:14:41.635931+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:42 vm07 bash[20771]: cephadm 2026-03-09T21:14:41.652238+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm10 to 227.8M 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:42 vm07 bash[20771]: cephadm 2026-03-09T21:14:41.652238+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm10 to 227.8M 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:42 vm07 bash[20771]: cephadm 2026-03-09T21:14:41.652734+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:42 vm07 bash[20771]: cephadm 2026-03-09T21:14:41.652734+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:42 vm07 bash[28052]: cephadm 2026-03-09T21:14:41.635931+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:42 vm07 bash[28052]: cephadm 2026-03-09T21:14:41.635931+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:42 vm07 bash[28052]: cephadm 2026-03-09T21:14:41.652238+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm10 to 227.8M 2026-03-09T21:14:43.365 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:42 vm07 bash[28052]: cephadm 2026-03-09T21:14:41.652238+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm10 to 227.8M 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:42 vm07 bash[28052]: cephadm 2026-03-09T21:14:41.652734+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T21:14:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:42 vm07 bash[28052]: cephadm 2026-03-09T21:14:41.652734+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 238960844: error parsing value: Value '238960844' is below minimum 939524096 2026-03-09T21:14:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:43 vm10 bash[23387]: cluster 2026-03-09T21:14:42.614822+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:43 vm10 bash[23387]: cluster 2026-03-09T21:14:42.614822+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:43 vm07 bash[20771]: cluster 2026-03-09T21:14:42.614822+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:43 vm07 bash[20771]: cluster 2026-03-09T21:14:42.614822+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:43 vm07 bash[28052]: cluster 2026-03-09T21:14:42.614822+0000 mgr.y (mgr.14150) 188 : 
cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:43 vm07 bash[28052]: cluster 2026-03-09T21:14:42.614822+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:45.224 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: cluster 2026-03-09T21:14:44.615144+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: cluster 2026-03-09T21:14:44.615144+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: audit 2026-03-09T21:14:45.541738+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: audit 2026-03-09T21:14:45.541738+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: audit 2026-03-09T21:14:45.543808+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.bootstrap-osd"}]: dispatch 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: audit 2026-03-09T21:14:45.543808+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: audit 2026-03-09T21:14:45.544406+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:46.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:45 vm10 bash[23387]: audit 2026-03-09T21:14:45.544406+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:46.367 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: cluster 2026-03-09T21:14:44.615144+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: cluster 2026-03-09T21:14:44.615144+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: audit 2026-03-09T21:14:45.541738+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: audit 2026-03-09T21:14:45.541738+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: audit 2026-03-09T21:14:45.543808+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: audit 2026-03-09T21:14:45.543808+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: audit 2026-03-09T21:14:45.544406+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:45 vm07 bash[20771]: audit 2026-03-09T21:14:45.544406+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: cluster 2026-03-09T21:14:44.615144+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: cluster 2026-03-09T21:14:44.615144+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: audit 
2026-03-09T21:14:45.541738+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: audit 2026-03-09T21:14:45.541738+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: audit 2026-03-09T21:14:45.543808+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: audit 2026-03-09T21:14:45.543808+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: audit 2026-03-09T21:14:45.544406+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:46.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:45 vm07 bash[28052]: audit 2026-03-09T21:14:45.544406+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:14:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:46 vm10 bash[23387]: audit 2026-03-09T21:14:45.539893+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24250 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", 
""]}]: dispatch 2026-03-09T21:14:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:46 vm10 bash[23387]: audit 2026-03-09T21:14:45.539893+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24250 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:14:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:46 vm07 bash[20771]: audit 2026-03-09T21:14:45.539893+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24250 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:14:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:46 vm07 bash[20771]: audit 2026-03-09T21:14:45.539893+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24250 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:14:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:46 vm07 bash[28052]: audit 2026-03-09T21:14:45.539893+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24250 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:14:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:46 vm07 bash[28052]: audit 2026-03-09T21:14:45.539893+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24250 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:14:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:47 vm10 bash[23387]: cluster 2026-03-09T21:14:46.615476+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T21:14:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:47 vm10 
bash[23387]: cluster 2026-03-09T21:14:46.615476+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T21:14:48.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:47 vm07 bash[20771]: cluster 2026-03-09T21:14:46.615476+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T21:14:48.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:47 vm07 bash[20771]: cluster 2026-03-09T21:14:46.615476+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T21:14:48.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:47 vm07 bash[28052]: cluster 2026-03-09T21:14:46.615476+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T21:14:48.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:47 vm07 bash[28052]: cluster 2026-03-09T21:14:46.615476+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-09T21:14:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:49 vm10 bash[23387]: cluster 2026-03-09T21:14:48.615846+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:49 vm10 bash[23387]: cluster 2026-03-09T21:14:48.615846+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 
2026-03-09T21:14:50.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:49 vm07 bash[20771]: cluster 2026-03-09T21:14:48.615846+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:50.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:49 vm07 bash[20771]: cluster 2026-03-09T21:14:48.615846+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:50.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:49 vm07 bash[28052]: cluster 2026-03-09T21:14:48.615846+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:50.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:49 vm07 bash[28052]: cluster 2026-03-09T21:14:48.615846+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: cluster 2026-03-09T21:14:50.616198+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: cluster 2026-03-09T21:14:50.616198+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.104860+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 
192.168.123.110:0/527022751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.104860+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.110:0/527022751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.105753+0000 mon.a (mon.0) 538 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.105753+0000 mon.a (mon.0) 538 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.110088+0000 mon.a (mon.0) 539 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]': finished 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.110088+0000 mon.a (mon.0) 539 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]': finished 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: cluster 2026-03-09T21:14:51.116044+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: cluster 2026-03-09T21:14:51.116044+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.116497+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.116497+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.812135+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.110:0/4181626789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:51 vm07 bash[28052]: audit 2026-03-09T21:14:51.812135+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.110:0/4181626789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: cluster 2026-03-09T21:14:50.616198+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: cluster 2026-03-09T21:14:50.616198+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.104860+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.110:0/527022751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.104860+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.110:0/527022751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.105753+0000 mon.a (mon.0) 538 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.105753+0000 mon.a (mon.0) 538 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.110088+0000 mon.a (mon.0) 539 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]': finished 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.110088+0000 mon.a (mon.0) 539 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]': finished 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: cluster 2026-03-09T21:14:51.116044+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: cluster 2026-03-09T21:14:51.116044+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.116497+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.116497+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.812135+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.110:0/4181626789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:51 vm07 bash[20771]: audit 2026-03-09T21:14:51.812135+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.110:0/4181626789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: cluster 2026-03-09T21:14:50.616198+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: cluster 2026-03-09T21:14:50.616198+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.104860+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.110:0/527022751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.104860+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.110:0/527022751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.105753+0000 mon.a (mon.0) 538 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.105753+0000 mon.a (mon.0) 538 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]: dispatch 2026-03-09T21:14:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.110088+0000 mon.a (mon.0) 539 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]': finished 2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.110088+0000 mon.a (mon.0) 539 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9ca0fe4-bec8-42a3-9f19-f8c556e71c46"}]': finished 2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: cluster 2026-03-09T21:14:51.116044+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: cluster 2026-03-09T21:14:51.116044+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.116497+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.116497+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 
2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.812135+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.110:0/4181626789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:52.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:51 vm10 bash[23387]: audit 2026-03-09T21:14:51.812135+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.110:0/4181626789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:14:54.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:54 vm07 bash[28052]: cluster 2026-03-09T21:14:52.616567+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:54.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:54 vm07 bash[28052]: cluster 2026-03-09T21:14:52.616567+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:54 vm07 bash[20771]: cluster 2026-03-09T21:14:52.616567+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:54 vm07 bash[20771]: cluster 2026-03-09T21:14:52.616567+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:54.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:54 vm10 bash[23387]: cluster 2026-03-09T21:14:52.616567+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:14:54.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:54 vm10 bash[23387]: cluster 
2026-03-09T21:14:52.616567+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:14:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:56 vm10 bash[23387]: cluster 2026-03-09T21:14:54.616889+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:14:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:56 vm07 bash[28052]: cluster 2026-03-09T21:14:54.616889+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:14:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:56 vm07 bash[20771]: cluster 2026-03-09T21:14:54.616889+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:14:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:14:58 vm10 bash[23387]: cluster 2026-03-09T21:14:56.617343+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:14:58.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:14:58 vm07 bash[20771]: cluster 2026-03-09T21:14:56.617343+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:14:58.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:14:58 vm07 bash[28052]: cluster 2026-03-09T21:14:56.617343+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:00.687 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:00 vm10 bash[23387]: cluster 2026-03-09T21:14:58.617694+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:00 vm07 bash[20771]: cluster 2026-03-09T21:14:58.617694+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:00 vm07 bash[28052]: cluster 2026-03-09T21:14:58.617694+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:01.807 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:01 vm10 bash[23387]: audit 2026-03-09T21:15:01.253011+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-09T21:15:01.807 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:01 vm10 bash[23387]: audit 2026-03-09T21:15:01.253726+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:01 vm07 bash[20771]: audit 2026-03-09T21:15:01.253011+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-09T21:15:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:01 vm07 bash[20771]: audit 2026-03-09T21:15:01.253726+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:01 vm07 bash[28052]: audit 2026-03-09T21:15:01.253011+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-09T21:15:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:01 vm07 bash[28052]: audit 2026-03-09T21:15:01.253726+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:02.670 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:02 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:02.670 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:15:02 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:02.670 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:15:02 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:02.671 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:15:02 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:02.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:02 vm10 bash[23387]: cluster 2026-03-09T21:15:00.618145+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:02.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:02 vm10 bash[23387]: cephadm 2026-03-09T21:15:01.254292+0000 mgr.y (mgr.14150) 199 : cephadm [INF] Deploying daemon osd.6 on vm10
2026-03-09T21:15:02.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:02 vm10 bash[23387]: audit 2026-03-09T21:15:02.709639+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:02.950 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:02 vm10 bash[23387]: audit 2026-03-09T21:15:02.726282+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:02 vm07 bash[20771]: cluster 2026-03-09T21:15:00.618145+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:02 vm07 bash[20771]: cephadm 2026-03-09T21:15:01.254292+0000 mgr.y (mgr.14150) 199 : cephadm [INF] Deploying daemon osd.6 on vm10
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:02 vm07 bash[20771]: audit 2026-03-09T21:15:02.709639+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:02 vm07 bash[20771]: audit 2026-03-09T21:15:02.726282+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:02 vm07 bash[28052]: cluster 2026-03-09T21:15:00.618145+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:02 vm07 bash[28052]: cephadm 2026-03-09T21:15:01.254292+0000 mgr.y (mgr.14150) 199 : cephadm [INF] Deploying daemon osd.6 on vm10
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:02 vm07 bash[28052]: audit 2026-03-09T21:15:02.709639+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:02 vm07 bash[28052]: audit 2026-03-09T21:15:02.726282+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:04.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:03 vm07 bash[20771]: cluster 2026-03-09T21:15:02.618565+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:04.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:03 vm07 bash[20771]: audit 2026-03-09T21:15:02.738973+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:04.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:03 vm07 bash[28052]: cluster 2026-03-09T21:15:02.618565+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:04.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:03 vm07 bash[28052]: audit 2026-03-09T21:15:02.738973+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:04.122 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:03 vm10 bash[23387]: cluster 2026-03-09T21:15:02.618565+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:04.122 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:03 vm10 bash[23387]: audit 2026-03-09T21:15:02.738973+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:05 vm10 bash[23387]: cluster 2026-03-09T21:15:04.618864+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:05 vm07 bash[20771]: cluster 2026-03-09T21:15:04.618864+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:05 vm07 bash[28052]: cluster 2026-03-09T21:15:04.618864+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:07 vm07 bash[20771]: cluster 2026-03-09T21:15:06.619198+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:07 vm07 bash[20771]: audit 2026-03-09T21:15:07.129707+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.110:6808/646422706' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T21:15:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:07 vm07 bash[20771]: audit 2026-03-09T21:15:07.130677+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T21:15:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:07 vm07 bash[28052]: cluster 2026-03-09T21:15:06.619198+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:07 vm07 bash[28052]: audit 2026-03-09T21:15:07.129707+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.110:6808/646422706' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T21:15:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:07 vm07 bash[28052]: audit 2026-03-09T21:15:07.130677+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T21:15:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:07 vm10 bash[23387]: cluster 2026-03-09T21:15:06.619198+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-09T21:15:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:07 vm10 bash[23387]: audit 2026-03-09T21:15:07.129707+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.110:6808/646422706' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T21:15:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:07 vm10 bash[23387]: audit 2026-03-09T21:15:07.130677+0000 mon.a (mon.0) 547 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: audit 2026-03-09T21:15:07.840645+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: audit 2026-03-09T21:15:07.845747+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.110:6808/646422706' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: cluster 2026-03-09T21:15:07.846998+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: audit 2026-03-09T21:15:07.848471+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: audit 2026-03-09T21:15:07.848563+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: audit 2026-03-09T21:15:08.846110+0000 mon.a (mon.0) 552 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:09.134 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: cluster 2026-03-09T21:15:08.854131+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T21:15:09.135 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:08 vm10 bash[23387]: cluster 2026-03-09T21:15:08.854131+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: audit 2026-03-09T21:15:07.840645+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: audit 2026-03-09T21:15:07.845747+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.110:6808/646422706' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: cluster 2026-03-09T21:15:07.846998+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: audit 2026-03-09T21:15:07.848471+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: audit 2026-03-09T21:15:07.848563+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: audit 2026-03-09T21:15:08.846110+0000 mon.a (mon.0) 552 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:08 vm07 bash[20771]: cluster 2026-03-09T21:15:08.854131+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: audit 2026-03-09T21:15:07.840645+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: audit 2026-03-09T21:15:07.845747+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.110:6808/646422706' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: cluster 2026-03-09T21:15:07.846998+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: audit 2026-03-09T21:15:07.848471+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: audit 2026-03-09T21:15:07.848563+0000 mon.a (mon.0) 551 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: audit 2026-03-09T21:15:08.846110+0000 mon.a (mon.0) 552 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:09.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:08 vm07 bash[28052]: cluster 2026-03-09T21:15:08.854131+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: cluster 2026-03-09T21:15:08.619569+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:08.855300+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:08.855300+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:08.859064+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:08.859064+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: cluster 2026-03-09T21:15:08.994225+0000 mon.a (mon.0) 556 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: cluster 2026-03-09T21:15:08.994225+0000 mon.a (mon.0) 556 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:08.994509+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:08.994509+0000 mon.a (mon.0) 
557 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.210498+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.210498+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.230228+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.230228+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.145 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.857386+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.857386+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.858280+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 
bash[23387]: audit 2026-03-09T21:15:09.858280+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.858746+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.858746+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.867531+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.146 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:09 vm10 bash[23387]: audit 2026-03-09T21:15:09.867531+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: cluster 2026-03-09T21:15:08.619569+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: cluster 2026-03-09T21:15:08.619569+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:08.855300+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:08.855300+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:08.859064+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:08.859064+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: cluster 2026-03-09T21:15:08.994225+0000 mon.a (mon.0) 556 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: cluster 2026-03-09T21:15:08.994225+0000 mon.a (mon.0) 556 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:08.994509+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:08.994509+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.210498+0000 mon.a (mon.0) 558 : audit 
[INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.210498+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.230228+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.230228+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.857386+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.857386+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.858280+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.858280+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 
bash[20771]: audit 2026-03-09T21:15:09.858746+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.858746+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.867531+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:09 vm07 bash[20771]: audit 2026-03-09T21:15:09.867531+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: cluster 2026-03-09T21:15:08.619569+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: cluster 2026-03-09T21:15:08.619569+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:08.855300+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:08.855300+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", 
"id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:08.859064+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:08.859064+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: cluster 2026-03-09T21:15:08.994225+0000 mon.a (mon.0) 556 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: cluster 2026-03-09T21:15:08.994225+0000 mon.a (mon.0) 556 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:08.994509+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:08.994509+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.210498+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.210498+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' 
entity='mgr.y' 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.230228+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.230228+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.857386+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.857386+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.858280+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.858280+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.858746+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 
bash[28052]: audit 2026-03-09T21:15:09.858746+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.867531+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:09 vm07 bash[28052]: audit 2026-03-09T21:15:09.867531+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:10.966 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 6 on host 'vm10' 2026-03-09T21:15:11.063 DEBUG:teuthology.orchestra.run.vm10:osd.6> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.6.service 2026-03-09T21:15:11.064 INFO:tasks.cephadm:Deploying osd.7 on vm10 with /dev/vdb... 2026-03-09T21:15:11.064 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- lvm zap /dev/vdb 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: cluster 2026-03-09T21:15:08.102663+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: cluster 2026-03-09T21:15:08.102663+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: cluster 2026-03-09T21:15:08.102732+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: cluster 
2026-03-09T21:15:08.102732+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: cluster 2026-03-09T21:15:09.997807+0000 mon.a (mon.0) 564 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: cluster 2026-03-09T21:15:09.997807+0000 mon.a (mon.0) 564 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:09.998150+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:09.998150+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.140662+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.140662+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.774982+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.774982+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.858275+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.858275+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.949646+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.949646+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.963839+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:10 vm07 bash[20771]: audit 2026-03-09T21:15:10.963839+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: cluster 2026-03-09T21:15:08.102663+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: cluster 2026-03-09T21:15:08.102663+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: cluster 
2026-03-09T21:15:08.102732+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: cluster 2026-03-09T21:15:08.102732+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: cluster 2026-03-09T21:15:09.997807+0000 mon.a (mon.0) 564 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: cluster 2026-03-09T21:15:09.997807+0000 mon.a (mon.0) 564 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:09.998150+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:09.998150+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.140662+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.140662+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.774982+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.774982+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.858275+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.858275+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.949646+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.949646+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.963839+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:10 vm07 bash[28052]: audit 2026-03-09T21:15:10.963839+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: cluster 2026-03-09T21:15:08.102663+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: cluster 
2026-03-09T21:15:08.102663+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: cluster 2026-03-09T21:15:08.102732+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: cluster 2026-03-09T21:15:08.102732+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: cluster 2026-03-09T21:15:09.997807+0000 mon.a (mon.0) 564 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: cluster 2026-03-09T21:15:09.997807+0000 mon.a (mon.0) 564 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:09.998150+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:09.998150+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.140662+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-09T21:15:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.140662+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.774982+0000 mon.a (mon.0) 567 : audit [DBG] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.774982+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.858275+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.858275+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.949646+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.949646+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.963839+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:11.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:10 vm10 bash[23387]: audit 2026-03-09T21:15:10.963839+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: cluster 
2026-03-09T21:15:10.619946+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: cluster 2026-03-09T21:15:10.619946+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: cluster 2026-03-09T21:15:11.147668+0000 mon.a (mon.0) 571 : cluster [INF] osd.6 v2:192.168.123.110:6808/646422706 boot 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: cluster 2026-03-09T21:15:11.147668+0000 mon.a (mon.0) 571 : cluster [INF] osd.6 v2:192.168.123.110:6808/646422706 boot 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: cluster 2026-03-09T21:15:11.147805+0000 mon.a (mon.0) 572 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: cluster 2026-03-09T21:15:11.147805+0000 mon.a (mon.0) 572 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: audit 2026-03-09T21:15:11.150120+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:12 vm07 bash[20771]: audit 2026-03-09T21:15:11.150120+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: cluster 2026-03-09T21:15:10.619946+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap 
v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: cluster 2026-03-09T21:15:10.619946+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: cluster 2026-03-09T21:15:11.147668+0000 mon.a (mon.0) 571 : cluster [INF] osd.6 v2:192.168.123.110:6808/646422706 boot 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: cluster 2026-03-09T21:15:11.147668+0000 mon.a (mon.0) 571 : cluster [INF] osd.6 v2:192.168.123.110:6808/646422706 boot 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: cluster 2026-03-09T21:15:11.147805+0000 mon.a (mon.0) 572 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: cluster 2026-03-09T21:15:11.147805+0000 mon.a (mon.0) 572 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: audit 2026-03-09T21:15:11.150120+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:12 vm07 bash[28052]: audit 2026-03-09T21:15:11.150120+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: cluster 2026-03-09T21:15:10.619946+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB 
avail 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: cluster 2026-03-09T21:15:10.619946+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: cluster 2026-03-09T21:15:11.147668+0000 mon.a (mon.0) 571 : cluster [INF] osd.6 v2:192.168.123.110:6808/646422706 boot 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: cluster 2026-03-09T21:15:11.147668+0000 mon.a (mon.0) 571 : cluster [INF] osd.6 v2:192.168.123.110:6808/646422706 boot 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: cluster 2026-03-09T21:15:11.147805+0000 mon.a (mon.0) 572 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: cluster 2026-03-09T21:15:11.147805+0000 mon.a (mon.0) 572 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: audit 2026-03-09T21:15:11.150120+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:12 vm10 bash[23387]: audit 2026-03-09T21:15:11.150120+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:15:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:13 vm07 bash[20771]: cluster 2026-03-09T21:15:12.291692+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T21:15:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:13 vm07 bash[20771]: cluster 
2026-03-09T21:15:12.291692+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T21:15:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:13 vm07 bash[28052]: cluster 2026-03-09T21:15:12.291692+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T21:15:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:13 vm07 bash[28052]: cluster 2026-03-09T21:15:12.291692+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T21:15:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:13 vm10 bash[23387]: cluster 2026-03-09T21:15:12.291692+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T21:15:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:13 vm10 bash[23387]: cluster 2026-03-09T21:15:12.291692+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:14 vm07 bash[20771]: cluster 2026-03-09T21:15:12.620285+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:14 vm07 bash[20771]: cluster 2026-03-09T21:15:12.620285+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:14 vm07 bash[20771]: cluster 2026-03-09T21:15:13.312685+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:14 vm07 bash[20771]: cluster 2026-03-09T21:15:13.312685+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:15:14 vm07 bash[28052]: cluster 2026-03-09T21:15:12.620285+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:14 vm07 bash[28052]: cluster 2026-03-09T21:15:12.620285+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:14 vm07 bash[28052]: cluster 2026-03-09T21:15:13.312685+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T21:15:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:14 vm07 bash[28052]: cluster 2026-03-09T21:15:13.312685+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T21:15:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:14 vm10 bash[23387]: cluster 2026-03-09T21:15:12.620285+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:14 vm10 bash[23387]: cluster 2026-03-09T21:15:12.620285+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:14 vm10 bash[23387]: cluster 2026-03-09T21:15:13.312685+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T21:15:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:14 vm10 bash[23387]: cluster 2026-03-09T21:15:13.312685+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-09T21:15:15.793 
INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:15:15.854 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:15 vm10 bash[23387]: cluster 2026-03-09T21:15:14.620534+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:15.855 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:15 vm10 bash[23387]: cluster 2026-03-09T21:15:14.620534+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:15.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:15 vm07 bash[20771]: cluster 2026-03-09T21:15:14.620534+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:15.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:15 vm07 bash[20771]: cluster 2026-03-09T21:15:14.620534+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:15.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:15 vm07 bash[28052]: cluster 2026-03-09T21:15:14.620534+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:15.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:15 vm07 bash[28052]: cluster 2026-03-09T21:15:14.620534+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:17.536 
INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:15:17.551 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch daemon add osd vm10:/dev/vdb 2026-03-09T21:15:17.763 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cluster 2026-03-09T21:15:16.620853+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:17.763 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cluster 2026-03-09T21:15:16.620853+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cephadm 2026-03-09T21:15:16.751530+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cephadm 2026-03-09T21:15:16.751530+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.758321+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.758321+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.765448+0000 
mon.a (mon.0) 577 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.765448+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.766517+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.766517+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.767540+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.767540+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.768020+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 
2026-03-09T21:15:16.768020+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cephadm 2026-03-09T21:15:16.768379+0000 mgr.y (mgr.14150) 209 : cephadm [INF] Adjusting osd_memory_target on vm10 to 151.9M 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cephadm 2026-03-09T21:15:16.768379+0000 mgr.y (mgr.14150) 209 : cephadm [INF] Adjusting osd_memory_target on vm10 to 151.9M 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cephadm 2026-03-09T21:15:16.768872+0000 mgr.y (mgr.14150) 210 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: cephadm 2026-03-09T21:15:16.768872+0000 mgr.y (mgr.14150) 210 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.769295+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.769295+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.769771+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.769771+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.775474+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:17.764 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:17 vm10 bash[23387]: audit 2026-03-09T21:15:16.775474+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cluster 2026-03-09T21:15:16.620853+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cluster 2026-03-09T21:15:16.620853+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cephadm 2026-03-09T21:15:16.751530+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cephadm 2026-03-09T21:15:16.751530+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 
2026-03-09T21:15:16.758321+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.758321+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.765448+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.765448+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.766517+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.766517+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.767540+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.767540+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config 
rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.768020+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.768020+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cephadm 2026-03-09T21:15:16.768379+0000 mgr.y (mgr.14150) 209 : cephadm [INF] Adjusting osd_memory_target on vm10 to 151.9M 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cephadm 2026-03-09T21:15:16.768379+0000 mgr.y (mgr.14150) 209 : cephadm [INF] Adjusting osd_memory_target on vm10 to 151.9M 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cephadm 2026-03-09T21:15:16.768872+0000 mgr.y (mgr.14150) 210 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: cephadm 2026-03-09T21:15:16.768872+0000 mgr.y (mgr.14150) 210 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.769295+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.769295+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.769771+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.769771+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.775474+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:17 vm07 bash[20771]: audit 2026-03-09T21:15:16.775474+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cluster 2026-03-09T21:15:16.620853+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cluster 2026-03-09T21:15:16.620853+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean+remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 2/6 objects misplaced (33.333%) 2026-03-09T21:15:18.116 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cephadm 2026-03-09T21:15:16.751530+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cephadm 2026-03-09T21:15:16.751530+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.758321+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.758321+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.765448+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.765448+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.766517+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.766517+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.767540+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.767540+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.768020+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.768020+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cephadm 2026-03-09T21:15:16.768379+0000 mgr.y (mgr.14150) 209 : cephadm [INF] Adjusting osd_memory_target on vm10 to 151.9M 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cephadm 2026-03-09T21:15:16.768379+0000 mgr.y (mgr.14150) 209 : cephadm [INF] Adjusting osd_memory_target on vm10 to 151.9M 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cephadm 2026-03-09T21:15:16.768872+0000 mgr.y (mgr.14150) 210 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T21:15:18.116 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: cephadm 2026-03-09T21:15:16.768872+0000 mgr.y (mgr.14150) 210 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 159307229: error parsing value: Value '159307229' is below minimum 939524096 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.769295+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:18.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.769295+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:18.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.769771+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:18.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.769771+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:18.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.775474+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:18.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:17 vm07 bash[28052]: audit 2026-03-09T21:15:16.775474+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:20.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:19 vm07 bash[20771]: cluster 
2026-03-09T21:15:18.621266+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 60 KiB/s, 0 objects/s recovering 2026-03-09T21:15:20.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:19 vm07 bash[20771]: cluster 2026-03-09T21:15:18.621266+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 60 KiB/s, 0 objects/s recovering 2026-03-09T21:15:20.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:19 vm07 bash[28052]: cluster 2026-03-09T21:15:18.621266+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 60 KiB/s, 0 objects/s recovering 2026-03-09T21:15:20.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:19 vm07 bash[28052]: cluster 2026-03-09T21:15:18.621266+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 60 KiB/s, 0 objects/s recovering 2026-03-09T21:15:20.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:19 vm10 bash[23387]: cluster 2026-03-09T21:15:18.621266+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 60 KiB/s, 0 objects/s recovering 2026-03-09T21:15:20.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:19 vm10 bash[23387]: cluster 2026-03-09T21:15:18.621266+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 60 KiB/s, 0 objects/s recovering 2026-03-09T21:15:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:21 vm07 bash[20771]: cluster 2026-03-09T21:15:20.621706+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T21:15:22.115 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:21 vm07 bash[20771]: cluster 2026-03-09T21:15:20.621706+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T21:15:22.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:21 vm07 bash[28052]: cluster 2026-03-09T21:15:20.621706+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T21:15:22.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:21 vm07 bash[28052]: cluster 2026-03-09T21:15:20.621706+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T21:15:22.185 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:15:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:21 vm10 bash[23387]: cluster 2026-03-09T21:15:20.621706+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T21:15:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:21 vm10 bash[23387]: cluster 2026-03-09T21:15:20.621706+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.478571+0000 mgr.y (mgr.14150) 213 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.478571+0000 mgr.y (mgr.14150) 213 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.480372+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.480372+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.482920+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.482920+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.483685+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:22 vm07 bash[20771]: audit 2026-03-09T21:15:22.483685+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.478571+0000 mgr.y (mgr.14150) 213 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.478571+0000 mgr.y (mgr.14150) 213 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.480372+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.480372+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.482920+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.482920+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 
vm07 bash[28052]: audit 2026-03-09T21:15:22.483685+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:22 vm07 bash[28052]: audit 2026-03-09T21:15:22.483685+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.478571+0000 mgr.y (mgr.14150) 213 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.478571+0000 mgr.y (mgr.14150) 213 : audit [DBG] from='client.24277 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.480372+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.480372+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.482920+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.482920+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.483685+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:22 vm10 bash[23387]: audit 2026-03-09T21:15:22.483685+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:24.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:23 vm07 bash[20771]: cluster 2026-03-09T21:15:22.622029+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:15:24.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:23 vm07 bash[20771]: cluster 2026-03-09T21:15:22.622029+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:15:24.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:23 vm07 bash[28052]: cluster 2026-03-09T21:15:22.622029+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:15:24.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:23 vm07 bash[28052]: cluster 2026-03-09T21:15:22.622029+0000 mgr.y (mgr.14150) 214 : cluster [DBG] 
pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:15:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:23 vm10 bash[23387]: cluster 2026-03-09T21:15:22.622029+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:15:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:23 vm10 bash[23387]: cluster 2026-03-09T21:15:22.622029+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-09T21:15:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:25 vm07 bash[20771]: cluster 2026-03-09T21:15:24.622320+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T21:15:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:25 vm07 bash[20771]: cluster 2026-03-09T21:15:24.622320+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T21:15:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:25 vm07 bash[28052]: cluster 2026-03-09T21:15:24.622320+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T21:15:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:25 vm07 bash[28052]: cluster 2026-03-09T21:15:24.622320+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T21:15:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:25 vm10 bash[23387]: 
cluster 2026-03-09T21:15:24.622320+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T21:15:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:25 vm10 bash[23387]: cluster 2026-03-09T21:15:24.622320+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T21:15:28.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:27 vm07 bash[20771]: cluster 2026-03-09T21:15:26.622614+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:15:28.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:27 vm07 bash[20771]: cluster 2026-03-09T21:15:26.622614+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:15:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:27 vm07 bash[28052]: cluster 2026-03-09T21:15:26.622614+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:15:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:27 vm07 bash[28052]: cluster 2026-03-09T21:15:26.622614+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:15:28.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:27 vm10 bash[23387]: cluster 2026-03-09T21:15:26.622614+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:15:28.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:27 vm10 bash[23387]: cluster 2026-03-09T21:15:26.622614+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.037910+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.110:0/2296188372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.037910+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.110:0/2296188372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.038678+0000 mon.a (mon.0) 587 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.038678+0000 mon.a (mon.0) 587 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.281842+0000 mon.a (mon.0) 588 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]': finished 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.281842+0000 mon.a (mon.0) 588 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]': finished 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: cluster 2026-03-09T21:15:28.391683+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: cluster 2026-03-09T21:15:28.391683+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.392266+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:28 vm07 bash[20771]: audit 2026-03-09T21:15:28.392266+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.037910+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.110:0/2296188372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.037910+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 
192.168.123.110:0/2296188372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.038678+0000 mon.a (mon.0) 587 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.038678+0000 mon.a (mon.0) 587 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.281842+0000 mon.a (mon.0) 588 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]': finished 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.281842+0000 mon.a (mon.0) 588 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]': finished 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: cluster 2026-03-09T21:15:28.391683+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: cluster 2026-03-09T21:15:28.391683+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.392266+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:29.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:28 vm07 bash[28052]: audit 2026-03-09T21:15:28.392266+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.037910+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.110:0/2296188372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.037910+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.110:0/2296188372' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.038678+0000 mon.a (mon.0) 587 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.038678+0000 mon.a (mon.0) 587 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]: dispatch 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.281842+0000 mon.a (mon.0) 588 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]': finished 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.281842+0000 mon.a (mon.0) 588 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f0752e8-2e42-4ee3-ac34-768b5409242e"}]': finished 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: cluster 2026-03-09T21:15:28.391683+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: cluster 2026-03-09T21:15:28.391683+0000 mon.a (mon.0) 589 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.392266+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:28 vm10 bash[23387]: audit 2026-03-09T21:15:28.392266+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 
2026-03-09T21:15:30.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:29 vm10 bash[23387]: cluster 2026-03-09T21:15:28.622918+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:30.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:29 vm10 bash[23387]: cluster 2026-03-09T21:15:28.622918+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:30.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:29 vm10 bash[23387]: audit 2026-03-09T21:15:29.057970+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.110:0/1997243413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:15:30.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:29 vm10 bash[23387]: audit 2026-03-09T21:15:29.057970+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.110:0/1997243413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:29 vm07 bash[20771]: cluster 2026-03-09T21:15:28.622918+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:29 vm07 bash[20771]: cluster 2026-03-09T21:15:28.622918+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:29 vm07 bash[20771]: audit 2026-03-09T21:15:29.057970+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.110:0/1997243413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:29 vm07 bash[20771]: audit 2026-03-09T21:15:29.057970+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.110:0/1997243413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:29 vm07 bash[28052]: cluster 2026-03-09T21:15:28.622918+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:29 vm07 bash[28052]: cluster 2026-03-09T21:15:28.622918+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:29 vm07 bash[28052]: audit 2026-03-09T21:15:29.057970+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.110:0/1997243413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:15:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:29 vm07 bash[28052]: audit 2026-03-09T21:15:29.057970+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.110:0/1997243413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T21:15:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:31 vm10 bash[23387]: cluster 2026-03-09T21:15:30.623341+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:31 vm10 bash[23387]: cluster 2026-03-09T21:15:30.623341+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:32.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:31 vm07 bash[20771]: cluster 2026-03-09T21:15:30.623341+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:32.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:31 vm07 bash[20771]: cluster 2026-03-09T21:15:30.623341+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:32.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:31 vm07 bash[28052]: cluster 2026-03-09T21:15:30.623341+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:32.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:31 vm07 bash[28052]: cluster 2026-03-09T21:15:30.623341+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:33 vm10 bash[23387]: cluster 2026-03-09T21:15:32.623732+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:34.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:33 vm10 bash[23387]: cluster 2026-03-09T21:15:32.623732+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:34.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:33 vm07 bash[20771]: cluster 2026-03-09T21:15:32.623732+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:34.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:33 vm07 bash[20771]: cluster 2026-03-09T21:15:32.623732+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:34.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:33 vm07 bash[28052]: cluster 2026-03-09T21:15:32.623732+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:34.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:33 vm07 bash[28052]: cluster 2026-03-09T21:15:32.623732+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:35 vm10 bash[23387]: cluster 2026-03-09T21:15:34.624048+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:35 vm10 bash[23387]: cluster 2026-03-09T21:15:34.624048+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:35 vm07 bash[20771]: cluster 2026-03-09T21:15:34.624048+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 
pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:35 vm07 bash[20771]: cluster 2026-03-09T21:15:34.624048+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:35 vm07 bash[28052]: cluster 2026-03-09T21:15:34.624048+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:35 vm07 bash[28052]: cluster 2026-03-09T21:15:34.624048+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:37.912 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:37 vm10 bash[23387]: cluster 2026-03-09T21:15:36.624435+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:37.913 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:37 vm10 bash[23387]: cluster 2026-03-09T21:15:36.624435+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:37.913 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:37 vm10 bash[23387]: audit 2026-03-09T21:15:37.674284+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T21:15:37.913 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:37 vm10 bash[23387]: audit 2026-03-09T21:15:37.674284+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T21:15:37.913 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:37 vm10 bash[23387]: audit 2026-03-09T21:15:37.674907+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:37.913 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:37 vm10 bash[23387]: audit 2026-03-09T21:15:37.674907+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:37 vm07 bash[20771]: cluster 2026-03-09T21:15:36.624435+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:37 vm07 bash[20771]: cluster 2026-03-09T21:15:36.624435+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:37 vm07 bash[20771]: audit 2026-03-09T21:15:37.674284+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:37 vm07 bash[20771]: audit 2026-03-09T21:15:37.674284+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:37 vm07 bash[20771]: audit 2026-03-09T21:15:37.674907+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:37 vm07 bash[20771]: audit 2026-03-09T21:15:37.674907+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:37 vm07 bash[28052]: cluster 2026-03-09T21:15:36.624435+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:37 vm07 bash[28052]: cluster 2026-03-09T21:15:36.624435+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:37 vm07 bash[28052]: audit 2026-03-09T21:15:37.674284+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:37 vm07 bash[28052]: audit 2026-03-09T21:15:37.674284+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:37 vm07 bash[28052]: audit 2026-03-09T21:15:37.674907+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:37 vm07 bash[28052]: audit 2026-03-09T21:15:37.674907+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:38 vm10 bash[23387]: cephadm 2026-03-09T21:15:37.675381+0000 mgr.y (mgr.14150) 222 : cephadm [INF] Deploying daemon osd.7 on vm10
2026-03-09T21:15:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:38 vm10 bash[23387]: cephadm 2026-03-09T21:15:37.675381+0000 mgr.y (mgr.14150) 222 : cephadm [INF] Deploying daemon osd.7 on vm10
2026-03-09T21:15:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:38 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.192 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:15:38 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:15:39 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:15:38 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:15:39 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:15:38 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:15:39 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:15:38 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.193 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:15:39 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:15:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:38 vm07 bash[20771]: cephadm 2026-03-09T21:15:37.675381+0000 mgr.y (mgr.14150) 222 : cephadm [INF] Deploying daemon osd.7 on vm10
2026-03-09T21:15:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:38 vm07 bash[20771]: cephadm 2026-03-09T21:15:37.675381+0000 mgr.y (mgr.14150) 222 : cephadm [INF] Deploying daemon osd.7 on vm10
2026-03-09T21:15:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:38 vm07 bash[28052]: cephadm 2026-03-09T21:15:37.675381+0000 mgr.y (mgr.14150) 222 : cephadm [INF] Deploying daemon osd.7 on vm10
2026-03-09T21:15:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:38 vm07 bash[28052]: cephadm 2026-03-09T21:15:37.675381+0000 mgr.y (mgr.14150) 222 : cephadm [INF] Deploying daemon osd.7 on vm10
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: cluster 2026-03-09T21:15:38.624803+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: cluster 2026-03-09T21:15:38.624803+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: audit 2026-03-09T21:15:39.250171+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: audit 2026-03-09T21:15:39.250171+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: audit 2026-03-09T21:15:39.256518+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: audit 2026-03-09T21:15:39.256518+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: audit 2026-03-09T21:15:39.263609+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:39 vm10 bash[23387]: audit 2026-03-09T21:15:39.263609+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: cluster 2026-03-09T21:15:38.624803+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: cluster 2026-03-09T21:15:38.624803+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: audit 2026-03-09T21:15:39.250171+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: audit 2026-03-09T21:15:39.250171+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: audit 2026-03-09T21:15:39.256518+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: audit 2026-03-09T21:15:39.256518+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: audit 2026-03-09T21:15:39.263609+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:39 vm07 bash[20771]: audit 2026-03-09T21:15:39.263609+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: cluster 2026-03-09T21:15:38.624803+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: cluster 2026-03-09T21:15:38.624803+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: audit 2026-03-09T21:15:39.250171+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: audit 2026-03-09T21:15:39.250171+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: audit 2026-03-09T21:15:39.256518+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: audit 2026-03-09T21:15:39.256518+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: audit 2026-03-09T21:15:39.263609+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:39 vm07 bash[28052]: audit 2026-03-09T21:15:39.263609+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:41 vm10 bash[23387]: cluster 2026-03-09T21:15:40.625244+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:41 vm10 bash[23387]: cluster 2026-03-09T21:15:40.625244+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:41 vm07 bash[20771]: cluster 2026-03-09T21:15:40.625244+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:41 vm07 bash[20771]: cluster 2026-03-09T21:15:40.625244+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:42.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:41 vm07 bash[28052]: cluster 2026-03-09T21:15:40.625244+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:42.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:41 vm07 bash[28052]: cluster 2026-03-09T21:15:40.625244+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:43 vm07 bash[20771]: cluster 2026-03-09T21:15:42.625655+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:43 vm07 bash[20771]: cluster 2026-03-09T21:15:42.625655+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:43 vm07 bash[20771]: audit 2026-03-09T21:15:42.974473+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:43 vm07 bash[20771]: audit 2026-03-09T21:15:42.974473+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:43 vm07 bash[20771]: audit 2026-03-09T21:15:42.975310+0000 mon.a (mon.0) 596 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:43 vm07 bash[20771]: audit 2026-03-09T21:15:42.975310+0000 mon.a (mon.0) 596 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:43 vm07 bash[28052]: cluster 2026-03-09T21:15:42.625655+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:43 vm07 bash[28052]: cluster 2026-03-09T21:15:42.625655+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:43 vm07 bash[28052]: audit 2026-03-09T21:15:42.974473+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:43 vm07 bash[28052]: audit 2026-03-09T21:15:42.974473+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:43 vm07 bash[28052]: audit 2026-03-09T21:15:42.975310+0000 mon.a (mon.0) 596 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:43 vm07 bash[28052]: audit 2026-03-09T21:15:42.975310+0000 mon.a (mon.0) 596 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:43 vm10 bash[23387]: cluster 2026-03-09T21:15:42.625655+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:43 vm10 bash[23387]: cluster 2026-03-09T21:15:42.625655+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:43 vm10 bash[23387]: audit 2026-03-09T21:15:42.974473+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:43 vm10 bash[23387]: audit 2026-03-09T21:15:42.974473+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:43 vm10 bash[23387]: audit 2026-03-09T21:15:42.975310+0000 mon.a (mon.0) 596 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:43 vm10 bash[23387]: audit 2026-03-09T21:15:42.975310+0000 mon.a (mon.0) 596 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.937633+0000 mon.a (mon.0) 597 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.937633+0000 mon.a (mon.0) 597 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.940823+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.940823+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: cluster 2026-03-09T21:15:43.943399+0000 mon.a (mon.0) 598 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: cluster 2026-03-09T21:15:43.943399+0000 mon.a (mon.0) 598 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.944514+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.944514+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.944977+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:43.944977+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:44.941304+0000 mon.a (mon.0) 601 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:44.941304+0000 mon.a (mon.0) 601 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: cluster 2026-03-09T21:15:44.947943+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: cluster 2026-03-09T21:15:44.947943+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:44.948396+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:44 vm07 bash[20771]: audit 2026-03-09T21:15:44.948396+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.937633+0000 mon.a (mon.0) 597 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.937633+0000 mon.a (mon.0) 597 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.940823+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.940823+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: cluster 2026-03-09T21:15:43.943399+0000 mon.a (mon.0) 598 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: cluster 2026-03-09T21:15:43.943399+0000 mon.a (mon.0) 598 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.944514+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.944514+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.944977+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:43.944977+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:44.941304+0000 mon.a (mon.0) 601 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:44.941304+0000 mon.a (mon.0) 601 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: cluster 2026-03-09T21:15:44.947943+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: cluster 2026-03-09T21:15:44.947943+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:44.948396+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:44 vm07 bash[28052]: audit 2026-03-09T21:15:44.948396+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.937633+0000 mon.a (mon.0) 597 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.937633+0000 mon.a (mon.0) 597 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.940823+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.940823+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.110:6812/2049527874' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: cluster 2026-03-09T21:15:43.943399+0000 mon.a (mon.0) 598 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: cluster 2026-03-09T21:15:43.943399+0000 mon.a (mon.0) 598 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.944514+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.944514+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.370 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.944977+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:43.944977+0000 mon.a (mon.0) 600 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:44.941304+0000 mon.a (mon.0) 601 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:44.941304+0000 mon.a (mon.0) 601 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: cluster 2026-03-09T21:15:44.947943+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: cluster 2026-03-09T21:15:44.947943+0000 mon.a (mon.0) 602 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:44.948396+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:45.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:44 vm10 bash[23387]: audit 2026-03-09T21:15:44.948396+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: cluster 2026-03-09T21:15:44.625949+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: cluster 2026-03-09T21:15:44.625949+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:44.950753+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:44.950753+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.631617+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.631617+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.640740+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.640740+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.641770+0000 mon.a (mon.0) 607 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.641770+0000 mon.a (mon.0) 607 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.642416+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.642416+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.663489+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:46 vm07 bash[20771]: audit 2026-03-09T21:15:45.663489+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: cluster 2026-03-09T21:15:44.625949+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: cluster 2026-03-09T21:15:44.625949+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:44.950753+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:44.950753+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.631617+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.631617+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.640740+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.640740+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.641770+0000 mon.a (mon.0) 607 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.641770+0000 mon.a (mon.0) 607 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.642416+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.642416+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.663489+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:46 vm07 bash[28052]: audit 2026-03-09T21:15:45.663489+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: cluster 2026-03-09T21:15:44.625949+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: cluster 2026-03-09T21:15:44.625949+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T21:15:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:44.950753+0000 mon.a 
(mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:44.950753+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:15:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.631617+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.631617+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.640740+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.640740+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.641770+0000 mon.a (mon.0) 607 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.641770+0000 mon.a (mon.0) 607 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 
bash[23387]: audit 2026-03-09T21:15:45.642416+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.642416+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.663489+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:46 vm10 bash[23387]: audit 2026-03-09T21:15:45.663489+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:46.993 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 7 on host 'vm10' 2026-03-09T21:15:47.102 DEBUG:teuthology.orchestra.run.vm10:osd.7> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.7.service 2026-03-09T21:15:47.103 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 
2026-03-09T21:15:47.103 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd stat -f json
2026-03-09T21:15:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: cluster 2026-03-09T21:15:43.982459+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:15:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: cluster 2026-03-09T21:15:43.982512+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:15:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:45.981307+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: cluster 2026-03-09T21:15:46.045842+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-09T21:15:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:46.094253+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:46.309355+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-09T21:15:47.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:46.951397+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:46.976367+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:47.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:46.983944+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:47.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:47 vm10 bash[23387]: audit 2026-03-09T21:15:46.991601+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: cluster 2026-03-09T21:15:43.982459+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: cluster 2026-03-09T21:15:43.982512+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:45.981307+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: cluster 2026-03-09T21:15:46.045842+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:46.094253+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:46.309355+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:46.951397+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:46.976367+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:46.983944+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:47 vm07 bash[20771]: audit 2026-03-09T21:15:46.991601+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: cluster 2026-03-09T21:15:43.982459+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: cluster 2026-03-09T21:15:43.982512+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:45.981307+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: cluster 2026-03-09T21:15:46.045842+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:46.094253+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:46.309355+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:46.951397+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:46.976367+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:46.983944+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:47.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:47 vm07 bash[28052]: audit 2026-03-09T21:15:46.991601+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:15:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:48 vm10 bash[23387]: cluster 2026-03-09T21:15:46.626252+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:48 vm10 bash[23387]: cluster 2026-03-09T21:15:47.124398+0000 mon.a (mon.0) 618 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:15:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:48 vm10 bash[23387]: cluster 2026-03-09T21:15:47.124558+0000 mon.a (mon.0) 619 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-09T21:15:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:48 vm10 bash[23387]: audit 2026-03-09T21:15:47.129308+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:48 vm07 bash[20771]: cluster 2026-03-09T21:15:46.626252+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:48 vm07 bash[20771]: cluster 2026-03-09T21:15:47.124398+0000 mon.a (mon.0) 618 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:48 vm07 bash[20771]: cluster 2026-03-09T21:15:47.124558+0000 mon.a (mon.0) 619 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:48 vm07 bash[20771]: audit 2026-03-09T21:15:47.129308+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:48 vm07 bash[28052]: cluster 2026-03-09T21:15:46.626252+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:48 vm07 bash[28052]: cluster 2026-03-09T21:15:47.124398+0000 mon.a (mon.0) 618 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:48 vm07 bash[28052]: cluster 2026-03-09T21:15:47.124558+0000 mon.a (mon.0) 619 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-09T21:15:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:48 vm07 bash[28052]: audit 2026-03-09T21:15:47.129308+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:15:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:49 vm10 bash[23387]: cluster 2026-03-09T21:15:48.140348+0000 mon.a (mon.0) 621 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-09T21:15:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:49 vm07 bash[20771]: cluster 2026-03-09T21:15:48.140348+0000 mon.a (mon.0) 621 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-09T21:15:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:49 vm07 bash[28052]: cluster 2026-03-09T21:15:48.140348+0000 mon.a (mon.0) 621 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-09T21:15:50.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:50 vm10 bash[23387]: cluster 2026-03-09T21:15:48.626563+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:15:50.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:50 vm10 bash[23387]: cluster 2026-03-09T21:15:49.164336+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-09T21:15:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:50 vm07 bash[20771]: cluster 2026-03-09T21:15:48.626563+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:15:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:50 vm07 bash[20771]: cluster 2026-03-09T21:15:49.164336+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-09T21:15:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:50 vm07 bash[28052]: cluster 2026-03-09T21:15:48.626563+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:15:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:50 vm07 bash[28052]: cluster 2026-03-09T21:15:49.164336+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-09T21:15:51.779 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config
2026-03-09T21:15:52.070 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:15:52.135 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":53,"num_osds":8,"num_up_osds":8,"osd_up_since":1773090947,"num_in_osds":8,"osd_in_since":1773090928,"num_remapped_pgs":0}
2026-03-09T21:15:52.135 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd dump --format=json
2026-03-09T21:15:52.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:52 vm07 bash[28052]: cluster 2026-03-09T21:15:50.626928+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:15:52.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:52 vm07 bash[28052]: audit 2026-03-09T21:15:52.070602+0000 mon.a (mon.0) 623 : audit [DBG] from='client.? 192.168.123.107:0/806580912' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T21:15:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:52 vm07 bash[20771]: cluster 2026-03-09T21:15:50.626928+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:15:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:52 vm07 bash[20771]: audit 2026-03-09T21:15:52.070602+0000 mon.a (mon.0) 623 : audit [DBG] from='client.? 192.168.123.107:0/806580912' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T21:15:52.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:52 vm07 bash[20771]: audit 2026-03-09T21:15:52.070602+0000 mon.a (mon.0) 623 : audit [DBG] from='client.? 
192.168.123.107:0/806580912' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T21:15:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:52 vm10 bash[23387]: cluster 2026-03-09T21:15:50.626928+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:52 vm10 bash[23387]: cluster 2026-03-09T21:15:50.626928+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:52 vm10 bash[23387]: audit 2026-03-09T21:15:52.070602+0000 mon.a (mon.0) 623 : audit [DBG] from='client.? 192.168.123.107:0/806580912' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T21:15:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:52 vm10 bash[23387]: audit 2026-03-09T21:15:52.070602+0000 mon.a (mon.0) 623 : audit [DBG] from='client.? 
192.168.123.107:0/806580912' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T21:15:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cluster 2026-03-09T21:15:52.627373+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cluster 2026-03-09T21:15:52.627373+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cephadm 2026-03-09T21:15:52.906925+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cephadm 2026-03-09T21:15:52.906925+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.912927+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.912927+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.920814+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.920814+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.193 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.922114+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.922114+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.924541+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.924541+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.925411+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.925411+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.926276+0000 mon.a 
(mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.926276+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cephadm 2026-03-09T21:15:52.926923+0000 mgr.y (mgr.14150) 232 : cephadm [INF] Adjusting osd_memory_target on vm10 to 113.9M 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cephadm 2026-03-09T21:15:52.926923+0000 mgr.y (mgr.14150) 232 : cephadm [INF] Adjusting osd_memory_target on vm10 to 113.9M 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cephadm 2026-03-09T21:15:52.927598+0000 mgr.y (mgr.14150) 233 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: cephadm 2026-03-09T21:15:52.927598+0000 mgr.y (mgr.14150) 233 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.927999+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.927999+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.928713+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.928713+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.933334+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:53 vm10 bash[23387]: audit 2026-03-09T21:15:52.933334+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cluster 2026-03-09T21:15:52.627373+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cluster 2026-03-09T21:15:52.627373+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cephadm 2026-03-09T21:15:52.906925+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cephadm 
2026-03-09T21:15:52.906925+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.912927+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.912927+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.920814+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.920814+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.922114+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.922114+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.924541+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.924541+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.925411+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.925411+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.926276+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.926276+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cephadm 2026-03-09T21:15:52.926923+0000 mgr.y (mgr.14150) 232 : cephadm [INF] Adjusting osd_memory_target on vm10 to 113.9M 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cephadm 2026-03-09T21:15:52.926923+0000 mgr.y (mgr.14150) 232 : cephadm [INF] Adjusting osd_memory_target on vm10 to 113.9M 
2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cephadm 2026-03-09T21:15:52.927598+0000 mgr.y (mgr.14150) 233 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: cephadm 2026-03-09T21:15:52.927598+0000 mgr.y (mgr.14150) 233 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.927999+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.927999+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.928713+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:54.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.928713+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.933334+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:53 vm07 bash[20771]: audit 2026-03-09T21:15:52.933334+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cluster 2026-03-09T21:15:52.627373+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cluster 2026-03-09T21:15:52.627373+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cephadm 2026-03-09T21:15:52.906925+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cephadm 2026-03-09T21:15:52.906925+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Detected new or changed devices on vm10 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.912927+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.912927+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.920814+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 
2026-03-09T21:15:52.920814+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.922114+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.922114+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.924541+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.924541+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.925411+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.925411+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: 
dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.926276+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.926276+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cephadm 2026-03-09T21:15:52.926923+0000 mgr.y (mgr.14150) 232 : cephadm [INF] Adjusting osd_memory_target on vm10 to 113.9M 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cephadm 2026-03-09T21:15:52.926923+0000 mgr.y (mgr.14150) 232 : cephadm [INF] Adjusting osd_memory_target on vm10 to 113.9M 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cephadm 2026-03-09T21:15:52.927598+0000 mgr.y (mgr.14150) 233 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: cephadm 2026-03-09T21:15:52.927598+0000 mgr.y (mgr.14150) 233 : cephadm [WRN] Unable to set osd_memory_target on vm10 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.927999+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:54.366 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.927999+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.928713+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.928713+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.933334+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:54.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:53 vm07 bash[28052]: audit 2026-03-09T21:15:52.933334+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:15:55.804 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:15:56.093 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:15:56.093 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":53,"fsid":"22c897f4-1bfc-11f1-adaa-13127443f8b3","created":"2026-03-09T21:09:52.807363+0000","modified":"2026-03-09T21:15:49.146281+0000","last_up_change":"2026-03-09T21:15:47.106059+0000","last_in_change":"2026-03-09T21:15:28.039101+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T21:12:50.641417+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"5ef293c2-89b5-4f27-a447-e0750ac5c165","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":51,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6801","nonce":2141296969}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2141296969}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":2141296969}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6803","nonce":2141296969}]},"public_addr":"192.168.123.107:6801/2141296969","cluster_addr":"192.168.123.107:6802/2141296969","heartbeat_back_addr":"192.168.123.107:6804/2141296969","heartbeat_front_addr":"192.168.123.107:6803/2141296969","state":["exists","up"]},{"osd":1,"uuid":"98ca1795-9ed4-4ffb-8a3f-f26e615f554f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6805","nonce":4103893323}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":4103893323}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":4103893323}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.1
07:6807","nonce":4103893323}]},"public_addr":"192.168.123.107:6805/4103893323","cluster_addr":"192.168.123.107:6806/4103893323","heartbeat_back_addr":"192.168.123.107:6808/4103893323","heartbeat_front_addr":"192.168.123.107:6807/4103893323","state":["exists","up"]},{"osd":2,"uuid":"4a040af0-0bb5-4407-ba5f-64091d0e0685","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6809","nonce":2553486713}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":2553486713}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":2553486713}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6811","nonce":2553486713}]},"public_addr":"192.168.123.107:6809/2553486713","cluster_addr":"192.168.123.107:6810/2553486713","heartbeat_back_addr":"192.168.123.107:6812/2553486713","heartbeat_front_addr":"192.168.123.107:6811/2553486713","state":["exists","up"]},{"osd":3,"uuid":"82b53895-a55e-4a96-84b2-f1efa2657688","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6813","nonce":1113345127}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1113345127}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1113345127}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6815","nonce":1113345127}]},"public_addr":"192.168.123.107:6813/1113345127","cluster_addr":"192.168.123.107:6814/1113345127","heartbeat_back_addr":"192.168.123.107:6816/1113345127","heartbeat_front_addr":"192.168.123.107:6815/1113345127","state":["exists","up"]},{"osd":4,"uuid":"1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean
_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":4164782911}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6801","nonce":4164782911}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6803","nonce":4164782911}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":4164782911}]},"public_addr":"192.168.123.110:6800/4164782911","cluster_addr":"192.168.123.110:6801/4164782911","heartbeat_back_addr":"192.168.123.110:6803/4164782911","heartbeat_front_addr":"192.168.123.110:6802/4164782911","state":["exists","up"]},{"osd":5,"uuid":"94d2c197-ad39-4db0-9389-4183a78f1d0a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":1216077544}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6805","nonce":1216077544}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6807","nonce":1216077544}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":1216077544}]},"public_addr":"192.168.123.110:6804/1216077544","cluster_addr":"192.168.123.110:6805/1216077544","heartbeat_back_addr":"192.168.123.110:6807/1216077544","heartbeat_front_addr":"192.168.123.110:6806/1216077544","state":["exists","up"]},{"osd":6,"uuid":"b9ca0fe4-bec8-42a3-9f19-f8c556e71c46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":45,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":646422706}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6809","nonce":646422706}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6811","nonce":646422706}]},"heartbeat_front
_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":646422706}]},"public_addr":"192.168.123.110:6808/646422706","cluster_addr":"192.168.123.110:6809/646422706","heartbeat_back_addr":"192.168.123.110:6811/646422706","heartbeat_front_addr":"192.168.123.110:6810/646422706","state":["exists","up"]},{"osd":7,"uuid":"1f0752e8-2e42-4ee3-ac34-768b5409242e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":2049527874}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6813","nonce":2049527874}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6815","nonce":2049527874}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":2049527874}]},"public_addr":"192.168.123.110:6812/2049527874","cluster_addr":"192.168.123.110:6813/2049527874","heartbeat_back_addr":"192.168.123.110:6815/2049527874","heartbeat_front_addr":"192.168.123.110:6814/2049527874","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:11:38.098561+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:12:13.068960+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:12:47.193524+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:13:22.119872+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"
features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:13:56.136897+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:14:32.010601+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:15:08.102734+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:15:43.982513+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/4284336732":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/1110217514":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/761815837":"2026-03-10T21:10:14.562158+0000","192.168.123.107:6800/2970840566":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/985741243":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/2131061409":"2026-03-10T21:10:04.153229+0000","192.168.123.107:6800/1914116107":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/1185689761":"2026-03-10T21:10:04.153229+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T21:15:56.107 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:55 vm07 bash[20771]: cluster 2026-03-09T21:15:54.627732+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB 
/ 160 GiB avail 2026-03-09T21:15:56.107 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:55 vm07 bash[20771]: cluster 2026-03-09T21:15:54.627732+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:56.108 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:55 vm07 bash[28052]: cluster 2026-03-09T21:15:54.627732+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:56.108 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:55 vm07 bash[28052]: cluster 2026-03-09T21:15:54.627732+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:56.181 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T21:12:50.641417+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 
400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T21:15:56.181 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd pool get .mgr pg_num 2026-03-09T21:15:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:55 vm10 bash[23387]: cluster 2026-03-09T21:15:54.627732+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:55 vm10 bash[23387]: cluster 2026-03-09T21:15:54.627732+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:56 vm10 bash[23387]: audit 2026-03-09T21:15:56.092682+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 
192.168.123.107:0/829719696' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:15:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:56 vm10 bash[23387]: audit 2026-03-09T21:15:56.092682+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.107:0/829719696' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:15:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:56 vm07 bash[20771]: audit 2026-03-09T21:15:56.092682+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.107:0/829719696' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:15:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:56 vm07 bash[20771]: audit 2026-03-09T21:15:56.092682+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.107:0/829719696' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:15:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:56 vm07 bash[28052]: audit 2026-03-09T21:15:56.092682+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.107:0/829719696' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:15:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:56 vm07 bash[28052]: audit 2026-03-09T21:15:56.092682+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 
192.168.123.107:0/829719696' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:15:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:57 vm10 bash[23387]: cluster 2026-03-09T21:15:56.628052+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:57 vm10 bash[23387]: cluster 2026-03-09T21:15:56.628052+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:57 vm07 bash[20771]: cluster 2026-03-09T21:15:56.628052+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:57 vm07 bash[20771]: cluster 2026-03-09T21:15:56.628052+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:57 vm07 bash[28052]: cluster 2026-03-09T21:15:56.628052+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:15:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:57 vm07 bash[28052]: cluster 2026-03-09T21:15:56.628052+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:59 vm07 bash[20771]: cluster 2026-03-09T21:15:58.628422+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:15:59 vm07 bash[20771]: cluster 2026-03-09T21:15:58.628422+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:59 vm07 bash[28052]: cluster 2026-03-09T21:15:58.628422+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:15:59 vm07 bash[28052]: cluster 2026-03-09T21:15:58.628422+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:59 vm10 bash[23387]: cluster 2026-03-09T21:15:58.628422+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:15:59 vm10 bash[23387]: cluster 2026-03-09T21:15:58.628422+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:00.841 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:16:01.220 INFO:teuthology.orchestra.run.vm07.stdout:pg_num: 1 2026-03-09T21:16:01.289 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm07 2026-03-09T21:16:01.289 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply rgw foo.a --placement '1;vm07=foo.a' 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:16:01 vm07 bash[20771]: cluster 2026-03-09T21:16:00.628796+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:01 vm07 bash[20771]: cluster 2026-03-09T21:16:00.628796+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:01 vm07 bash[20771]: audit 2026-03-09T21:16:01.220468+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.107:0/1957436937' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:01 vm07 bash[20771]: audit 2026-03-09T21:16:01.220468+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.107:0/1957436937' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:01 vm07 bash[28052]: cluster 2026-03-09T21:16:00.628796+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:01 vm07 bash[28052]: cluster 2026-03-09T21:16:00.628796+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:01 vm07 bash[28052]: audit 2026-03-09T21:16:01.220468+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 
192.168.123.107:0/1957436937' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T21:16:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:01 vm07 bash[28052]: audit 2026-03-09T21:16:01.220468+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.107:0/1957436937' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T21:16:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:01 vm10 bash[23387]: cluster 2026-03-09T21:16:00.628796+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:01 vm10 bash[23387]: cluster 2026-03-09T21:16:00.628796+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:01 vm10 bash[23387]: audit 2026-03-09T21:16:01.220468+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.107:0/1957436937' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T21:16:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:01 vm10 bash[23387]: audit 2026-03-09T21:16:01.220468+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 
192.168.123.107:0/1957436937' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T21:16:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:03 vm07 bash[20771]: cluster 2026-03-09T21:16:02.629095+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:03 vm07 bash[20771]: cluster 2026-03-09T21:16:02.629095+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:03 vm07 bash[28052]: cluster 2026-03-09T21:16:02.629095+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:03 vm07 bash[28052]: cluster 2026-03-09T21:16:02.629095+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:03 vm10 bash[23387]: cluster 2026-03-09T21:16:02.629095+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:03 vm10 bash[23387]: cluster 2026-03-09T21:16:02.629095+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:05.935 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:16:06.263 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled rgw.foo.a update... 
2026-03-09T21:16:06.275 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:05 vm10 bash[23387]: cluster 2026-03-09T21:16:04.629442+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:06.275 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:05 vm10 bash[23387]: cluster 2026-03-09T21:16:04.629442+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:06.324 DEBUG:teuthology.orchestra.run.vm07:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@rgw.foo.a.service 2026-03-09T21:16:06.326 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm10 2026-03-09T21:16:06.326 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd pool create datapool 3 3 replicated 2026-03-09T21:16:06.332 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:05 vm07 bash[20771]: cluster 2026-03-09T21:16:04.629442+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:06.332 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:05 vm07 bash[20771]: cluster 2026-03-09T21:16:04.629442+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:06.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:05 vm07 bash[28052]: cluster 2026-03-09T21:16:04.629442+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:06.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:05 vm07 bash[28052]: cluster 
2026-03-09T21:16:04.629442+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.254883+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.24287 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm07=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.254883+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.24287 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm07=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: cephadm 2026-03-09T21:16:06.256433+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: cephadm 2026-03-09T21:16:06.256433+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.262133+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.262133+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.263336+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.263336+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.630762+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.630762+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.631515+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.631515+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.638184+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.638184+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.558 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.640356+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.640356+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.642711+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.642711+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.649786+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.649786+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.652507+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 bash[20771]: audit 2026-03-09T21:16:06.652507+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.254883+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.24287 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm07=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.254883+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.24287 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm07=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: cephadm 2026-03-09T21:16:06.256433+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: cephadm 2026-03-09T21:16:06.256433+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:07.558 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.262133+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.559 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.262133+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.263336+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.630762+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.631515+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.638184+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.640356+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.642711+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.649786+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 bash[28052]: audit 2026-03-09T21:16:06.652507+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.254883+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.24287 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm07=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: cephadm 2026-03-09T21:16:06.256433+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.262133+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.263336+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.630762+0000 mon.a (mon.0) 637 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.631515+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.638184+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.640356+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T21:16:07.706 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.642711+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T21:16:07.707 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.649786+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:07.707 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:07 vm10 bash[23387]: audit 2026-03-09T21:16:06.652507+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:07.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:07.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:07.866 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:07.866 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:07.866 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:07.866 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:16:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:08.163 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:16:08 vm07 systemd[1]: Started Ceph rgw.foo.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:16:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: cluster 2026-03-09T21:16:06.629853+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: cephadm 2026-03-09T21:16:06.653155+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Deploying daemon rgw.foo.a on vm07 2026-03-09T21:16:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: audit 2026-03-09T21:16:08.104013+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: audit 2026-03-09T21:16:08.115441+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: audit 2026-03-09T21:16:08.129822+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: audit 2026-03-09T21:16:08.143169+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: audit 2026-03-09T21:16:08.151109+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:08 vm07 bash[20771]: audit 2026-03-09T21:16:08.165455+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: cluster 2026-03-09T21:16:06.629853+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: cephadm 2026-03-09T21:16:06.653155+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Deploying daemon rgw.foo.a on vm07 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: audit 2026-03-09T21:16:08.104013+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: audit 2026-03-09T21:16:08.115441+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: audit 2026-03-09T21:16:08.129822+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: audit 2026-03-09T21:16:08.143169+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: audit 2026-03-09T21:16:08.151109+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:08 vm07 bash[28052]: audit 2026-03-09T21:16:08.165455+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: cluster 2026-03-09T21:16:06.629853+0000 mgr.y (mgr.14150) 242 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: cephadm 2026-03-09T21:16:06.653155+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Deploying daemon rgw.foo.a on vm07 2026-03-09T21:16:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: audit 2026-03-09T21:16:08.104013+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: audit 2026-03-09T21:16:08.115441+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: audit 2026-03-09T21:16:08.129822+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: audit 2026-03-09T21:16:08.143169+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: audit 2026-03-09T21:16:08.151109+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:08.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:08 vm10 bash[23387]: audit 2026-03-09T21:16:08.165455+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:09 vm07 bash[20771]: cephadm 2026-03-09T21:16:08.133945+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:09 vm07 bash[20771]: cluster 2026-03-09T21:16:09.167263+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:09 vm07 bash[20771]: audit 2026-03-09T21:16:09.168144+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.107:0/2355541501' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:09 vm07 bash[20771]: audit 2026-03-09T21:16:09.172416+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:09 vm07 bash[28052]: cephadm 2026-03-09T21:16:08.133945+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:09 vm07 bash[28052]: cluster 2026-03-09T21:16:09.167263+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:09 vm07 bash[28052]: audit 2026-03-09T21:16:09.168144+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.107:0/2355541501' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T21:16:09.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:09 vm07 bash[28052]: audit 2026-03-09T21:16:09.172416+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T21:16:09.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:09 vm10 bash[23387]: cephadm 2026-03-09T21:16:08.133945+0000 mgr.y (mgr.14150) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm07=foo.a;count:1 2026-03-09T21:16:09.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:09 vm10 bash[23387]: cluster 2026-03-09T21:16:09.167263+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T21:16:09.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:09 vm10 bash[23387]: audit 2026-03-09T21:16:09.168144+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.107:0/2355541501' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T21:16:09.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:09 vm10 bash[23387]: audit 2026-03-09T21:16:09.172416+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:10 vm07 bash[20771]: cluster 2026-03-09T21:16:08.630208+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:10 vm07 bash[20771]: audit 2026-03-09T21:16:10.186165+0000 mon.a (mon.0) 652 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:10 vm07 bash[20771]: audit 2026-03-09T21:16:10.186165+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:10 vm07 bash[20771]: cluster 2026-03-09T21:16:10.192312+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:10 vm07 bash[20771]: cluster 2026-03-09T21:16:10.192312+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:10 vm07 bash[28052]: cluster 2026-03-09T21:16:08.630208+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:10 vm07 bash[28052]: cluster 2026-03-09T21:16:08.630208+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:10 vm07 bash[28052]: audit 2026-03-09T21:16:10.186165+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:10 vm07 bash[28052]: audit 2026-03-09T21:16:10.186165+0000 mon.a (mon.0) 652 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:10 vm07 bash[28052]: cluster 2026-03-09T21:16:10.192312+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T21:16:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:10 vm07 bash[28052]: cluster 2026-03-09T21:16:10.192312+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T21:16:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:10 vm10 bash[23387]: cluster 2026-03-09T21:16:08.630208+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:10 vm10 bash[23387]: cluster 2026-03-09T21:16:08.630208+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:10 vm10 bash[23387]: audit 2026-03-09T21:16:10.186165+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T21:16:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:10 vm10 bash[23387]: audit 2026-03-09T21:16:10.186165+0000 mon.a (mon.0) 652 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T21:16:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:10 vm10 bash[23387]: cluster 2026-03-09T21:16:10.192312+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T21:16:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:10 vm10 bash[23387]: cluster 2026-03-09T21:16:10.192312+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T21:16:10.961 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:16:12.244 INFO:teuthology.orchestra.run.vm10.stderr:pool 'datapool' created 2026-03-09T21:16:12.323 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- rbd pool init datapool 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: cluster 2026-03-09T21:16:10.630567+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v223: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: cluster 2026-03-09T21:16:10.630567+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v223: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: cluster 2026-03-09T21:16:11.194267+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: cluster 
2026-03-09T21:16:11.194267+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.211968+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.211968+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.212496+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.212496+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.374989+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.110:0/2403845078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.374989+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 
192.168.123.110:0/2403845078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.376104+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:12 vm07 bash[20771]: audit 2026-03-09T21:16:11.376104+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: cluster 2026-03-09T21:16:10.630567+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v223: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: cluster 2026-03-09T21:16:10.630567+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v223: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: cluster 2026-03-09T21:16:11.194267+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T21:16:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: cluster 2026-03-09T21:16:11.194267+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.211968+0000 mon.c (mon.2) 12 : audit [INF] 
from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.211968+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.212496+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.212496+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.374989+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.110:0/2403845078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.374989+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 
192.168.123.110:0/2403845078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.376104+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:12 vm07 bash[28052]: audit 2026-03-09T21:16:11.376104+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: cluster 2026-03-09T21:16:10.630567+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v223: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: cluster 2026-03-09T21:16:10.630567+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v223: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: cluster 2026-03-09T21:16:11.194267+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: cluster 2026-03-09T21:16:11.194267+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.211968+0000 mon.c (mon.2) 12 : audit [INF] 
from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.211968+0000 mon.c (mon.2) 12 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.212496+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.212496+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.374989+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.110:0/2403845078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.374989+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 
192.168.123.110:0/2403845078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.376104+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:12 vm10 bash[23387]: audit 2026-03-09T21:16:11.376104+0000 mon.a (mon.0) 656 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: audit 2026-03-09T21:16:12.207932+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: audit 2026-03-09T21:16:12.207932+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: audit 2026-03-09T21:16:12.208058+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: audit 2026-03-09T21:16:12.208058+0000 mon.a (mon.0) 658 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: cluster 2026-03-09T21:16:12.228408+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: cluster 2026-03-09T21:16:12.228408+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: audit 2026-03-09T21:16:12.644189+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:13 vm07 bash[20771]: audit 2026-03-09T21:16:12.644189+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: audit 2026-03-09T21:16:12.207932+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: audit 2026-03-09T21:16:12.207932+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: audit 2026-03-09T21:16:12.208058+0000 mon.a (mon.0) 658 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: audit 2026-03-09T21:16:12.208058+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: cluster 2026-03-09T21:16:12.228408+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: cluster 2026-03-09T21:16:12.228408+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: audit 2026-03-09T21:16:12.644189+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:13 vm07 bash[28052]: audit 2026-03-09T21:16:12.644189+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: audit 2026-03-09T21:16:12.207932+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: audit 2026-03-09T21:16:12.207932+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: audit 2026-03-09T21:16:12.208058+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: audit 2026-03-09T21:16:12.208058+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: cluster 2026-03-09T21:16:12.228408+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: cluster 2026-03-09T21:16:12.228408+0000 mon.a (mon.0) 659 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: audit 2026-03-09T21:16:12.644189+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:13 vm10 bash[23387]: audit 2026-03-09T21:16:12.644189+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cluster 2026-03-09T21:16:12.630990+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v226: 68 pgs: 7 creating+peering, 40 unknown, 21 active+clean; 450 KiB data, 615 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T21:16:14.615 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cluster 2026-03-09T21:16:12.630990+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v226: 68 pgs: 7 creating+peering, 40 unknown, 21 active+clean; 450 KiB data, 615 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cluster 2026-03-09T21:16:13.208580+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cluster 2026-03-09T21:16:13.208580+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cluster 2026-03-09T21:16:13.248620+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cluster 2026-03-09T21:16:13.248620+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.266019+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.266019+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 
192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.266980+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.266980+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.365638+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.365638+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.487844+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.487844+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.489304+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.489304+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.490102+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: audit 2026-03-09T21:16:13.490102+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cephadm 2026-03-09T21:16:13.493873+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T21:16:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:14 vm07 bash[20771]: cephadm 2026-03-09T21:16:13.493873+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cluster 2026-03-09T21:16:12.630990+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v226: 68 pgs: 7 creating+peering, 40 unknown, 21 active+clean; 450 KiB data, 615 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cluster 2026-03-09T21:16:12.630990+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v226: 68 pgs: 7 creating+peering, 40 unknown, 21 active+clean; 450 KiB data, 615 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T21:16:14.616 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cluster 2026-03-09T21:16:13.208580+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cluster 2026-03-09T21:16:13.208580+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cluster 2026-03-09T21:16:13.248620+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cluster 2026-03-09T21:16:13.248620+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.266019+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.266019+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.266980+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.266980+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.365638+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.365638+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.487844+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.487844+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.489304+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.489304+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.490102+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: audit 2026-03-09T21:16:13.490102+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cephadm 2026-03-09T21:16:13.493873+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T21:16:14.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:14 vm07 bash[28052]: cephadm 2026-03-09T21:16:13.493873+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cluster 2026-03-09T21:16:12.630990+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v226: 68 pgs: 7 creating+peering, 40 unknown, 21 active+clean; 450 KiB data, 615 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cluster 2026-03-09T21:16:12.630990+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v226: 68 pgs: 7 creating+peering, 40 unknown, 21 active+clean; 450 KiB data, 615 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 767 B/s wr, 1 op/s 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cluster 2026-03-09T21:16:13.208580+0000 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cluster 2026-03-09T21:16:13.208580+0000 
mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cluster 2026-03-09T21:16:13.248620+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cluster 2026-03-09T21:16:13.248620+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.266019+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.266019+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.266980+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.266980+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.365638+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.365638+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.487844+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.487844+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.489304+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.489304+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: audit 2026-03-09T21:16:13.490102+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 
bash[23387]: audit 2026-03-09T21:16:13.490102+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cephadm 2026-03-09T21:16:13.493873+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T21:16:14.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:14 vm10 bash[23387]: cephadm 2026-03-09T21:16:13.493873+0000 mgr.y (mgr.14150) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:14.234380+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:14.234380+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: cluster 2026-03-09T21:16:14.250436+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: cluster 2026-03-09T21:16:14.250436+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.259220+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 
192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.259220+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.259381+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.259381+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: cluster 2026-03-09T21:16:15.259567+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: cluster 2026-03-09T21:16:15.259567+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.261564+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.261564+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.261670+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:15 vm07 bash[20771]: audit 2026-03-09T21:16:15.261670+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:14.234380+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:14.234380+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: cluster 2026-03-09T21:16:14.250436+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: cluster 2026-03-09T21:16:14.250436+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.259220+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.259220+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.259381+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.259381+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 
192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: cluster 2026-03-09T21:16:15.259567+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: cluster 2026-03-09T21:16:15.259567+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.261564+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.261564+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.261670+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:15 vm07 bash[28052]: audit 2026-03-09T21:16:15.261670+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:14.234380+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:14.234380+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: cluster 2026-03-09T21:16:14.250436+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: cluster 2026-03-09T21:16:14.250436+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.259220+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.259220+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.259381+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 
192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.259381+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: cluster 2026-03-09T21:16:15.259567+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T21:16:15.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: cluster 2026-03-09T21:16:15.259567+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T21:16:15.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.261564+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.261564+0000 mon.a (mon.0) 671 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.261670+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:15.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:15 vm10 bash[23387]: audit 2026-03-09T21:16:15.261670+0000 mon.a (mon.0) 672 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: cluster 2026-03-09T21:16:14.631473+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v229: 100 pgs: 3 creating+peering, 44 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: cluster 2026-03-09T21:16:14.631473+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v229: 100 pgs: 3 creating+peering, 44 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.243666+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.243666+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.243804+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.243804+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.251758+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.251758+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.253080+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.253080+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 
192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: cluster 2026-03-09T21:16:16.259485+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: cluster 2026-03-09T21:16:16.259485+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.270017+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.270017+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.270128+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:16 vm07 bash[20771]: audit 2026-03-09T21:16:16.270128+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: cluster 2026-03-09T21:16:14.631473+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v229: 100 pgs: 3 creating+peering, 44 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: cluster 2026-03-09T21:16:14.631473+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v229: 100 pgs: 3 creating+peering, 44 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.243666+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.243666+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.243804+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.243804+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.251758+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.251758+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.253080+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.253080+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 
192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: cluster 2026-03-09T21:16:16.259485+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: cluster 2026-03-09T21:16:16.259485+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.270017+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.270017+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.270128+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:16 vm07 bash[28052]: audit 2026-03-09T21:16:16.270128+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: cluster 2026-03-09T21:16:14.631473+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v229: 100 pgs: 3 creating+peering, 44 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: cluster 2026-03-09T21:16:14.631473+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v229: 100 pgs: 3 creating+peering, 44 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.243666+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.243666+0000 mon.a (mon.0) 673 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.243804+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.243804+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.251758+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.251758+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/756988354' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.253080+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.253080+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 
192.168.123.107:0/4272664820' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: cluster 2026-03-09T21:16:16.259485+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.270017+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-09T21:16:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:16 vm10 bash[23387]: audit 2026-03-09T21:16:16.270128+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-09T21:16:16.963 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[20771]: audit 2026-03-09T21:16:17.125481+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.110:0/493524348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[20771]: audit 2026-03-09T21:16:17.126413+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[20771]: audit 2026-03-09T21:16:17.247834+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[20771]: audit 2026-03-09T21:16:17.247968+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[20771]: audit 2026-03-09T21:16:17.248010+0000 mon.a (mon.0) 681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[20771]: cluster 2026-03-09T21:16:17.254113+0000 mon.a (mon.0) 682 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:17 vm07 bash[28052]: audit 2026-03-09T21:16:17.125481+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.110:0/493524348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-09T21:16:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:17 vm07 bash[28052]: audit 2026-03-09T21:16:17.126413+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-09T21:16:17.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:17 vm07 bash[28052]: audit 2026-03-09T21:16:17.247834+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T21:16:17.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:17 vm07 bash[28052]: audit 2026-03-09T21:16:17.247968+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T21:16:17.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:17 vm07 bash[28052]: audit 2026-03-09T21:16:17.248010+0000 mon.a (mon.0) 681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-09T21:16:17.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:17 vm07 bash[28052]: cluster 2026-03-09T21:16:17.254113+0000 mon.a (mon.0) 682 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-09T21:16:17.616 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:16:17 vm07 bash[52961]: debug 2026-03-09T21:16:17.361+0000 7f0cc4b1e980 -1 LDAP not started since no server URIs were provided in the configuration.
2026-03-09T21:16:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:17 vm10 bash[23387]: audit 2026-03-09T21:16:17.125481+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.110:0/493524348' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-09T21:16:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:17 vm10 bash[23387]: audit 2026-03-09T21:16:17.126413+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-09T21:16:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:17 vm10 bash[23387]: audit 2026-03-09T21:16:17.247834+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T21:16:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:17 vm10 bash[23387]: audit 2026-03-09T21:16:17.247968+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-09T21:16:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:17 vm10 bash[23387]: audit 2026-03-09T21:16:17.248010+0000 mon.a (mon.0) 681 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-09T21:16:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:17 vm10 bash[23387]: cluster 2026-03-09T21:16:17.254113+0000 mon.a (mon.0) 682 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: cluster 2026-03-09T21:16:16.631959+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 20 creating+peering, 26 unknown, 86 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 6 op/s
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: audit 2026-03-09T21:16:17.646334+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: audit 2026-03-09T21:16:17.666820+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: audit 2026-03-09T21:16:17.687669+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: audit 2026-03-09T21:16:17.717191+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: audit 2026-03-09T21:16:18.093299+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:18 vm07 bash[20771]: audit 2026-03-09T21:16:18.094004+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: cluster 2026-03-09T21:16:16.631959+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 20 creating+peering, 26 unknown, 86 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 6 op/s
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: audit 2026-03-09T21:16:17.646334+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: audit 2026-03-09T21:16:17.666820+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: audit 2026-03-09T21:16:17.687669+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: audit 2026-03-09T21:16:17.717191+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: audit 2026-03-09T21:16:18.093299+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:18 vm07 bash[28052]: audit 2026-03-09T21:16:18.094004+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: cluster 2026-03-09T21:16:16.631959+0000 mgr.y (mgr.14150) 250 : cluster [DBG] pgmap v232: 132 pgs: 20 creating+peering, 26 unknown, 86 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 6 op/s
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: audit 2026-03-09T21:16:17.646334+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: audit 2026-03-09T21:16:17.666820+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: audit 2026-03-09T21:16:17.687669+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: audit 2026-03-09T21:16:17.717191+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: audit 2026-03-09T21:16:18.093299+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:18 vm10 bash[23387]: audit 2026-03-09T21:16:18.094004+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:16:19.503 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.110 --placement '1;vm10=iscsi.a'
2026-03-09T21:16:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:19 vm10 bash[23387]: cephadm 2026-03-09T21:16:18.096639+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-09T21:16:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:19 vm10 bash[23387]: cluster 2026-03-09T21:16:18.298013+0000 mon.a (mon.0) 689 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-09T21:16:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:19 vm10 bash[23387]: audit 2026-03-09T21:16:18.365136+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:19 vm07 bash[20771]: cephadm 2026-03-09T21:16:18.096639+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-09T21:16:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:19 vm07 bash[20771]: cluster 2026-03-09T21:16:18.298013+0000 mon.a (mon.0) 689 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-09T21:16:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:19 vm07 bash[20771]: audit 2026-03-09T21:16:18.365136+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:19 vm07 bash[28052]: cephadm 2026-03-09T21:16:18.096639+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-09T21:16:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:19 vm07 bash[28052]: cluster 2026-03-09T21:16:18.298013+0000 mon.a (mon.0) 689 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-09T21:16:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:19 vm07 bash[28052]: audit 2026-03-09T21:16:18.365136+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:20 vm10 bash[23387]: cluster 2026-03-09T21:16:18.632432+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v235: 132 pgs: 17 creating+peering, 4 unknown, 111 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 75 op/s
2026-03-09T21:16:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:20 vm10 bash[23387]: cluster 2026-03-09T21:16:19.362383+0000 mon.a (mon.0) 691 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-09T21:16:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:20 vm10 bash[23387]: cluster 2026-03-09T21:16:19.362404+0000 mon.a (mon.0) 692 : cluster [INF] Cluster is now healthy
2026-03-09T21:16:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:20 vm10 bash[23387]: cluster 2026-03-09T21:16:19.405432+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:20 vm07 bash[20771]: cluster 2026-03-09T21:16:18.632432+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v235: 132 pgs: 17 creating+peering, 4 unknown, 111 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 75 op/s
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:20 vm07 bash[20771]: cluster 2026-03-09T21:16:19.362383+0000 mon.a (mon.0) 691 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:20 vm07 bash[20771]: cluster 2026-03-09T21:16:19.362404+0000 mon.a (mon.0) 692 : cluster [INF] Cluster is now healthy
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:20 vm07 bash[20771]: cluster 2026-03-09T21:16:19.405432+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:20 vm07 bash[28052]: cluster 2026-03-09T21:16:18.632432+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v235: 132 pgs: 17 creating+peering, 4 unknown, 111 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 75 op/s
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:20 vm07 bash[28052]: cluster 2026-03-09T21:16:19.362383+0000 mon.a (mon.0) 691 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:20 vm07 bash[28052]: cluster 2026-03-09T21:16:19.362404+0000 mon.a (mon.0) 692 : cluster [INF] Cluster is now healthy
2026-03-09T21:16:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:20 vm07 bash[28052]: cluster 2026-03-09T21:16:19.405432+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-09T21:16:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:22 vm07 bash[20771]: cluster 2026-03-09T21:16:20.632872+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v237: 132 pgs: 17 creating+peering, 115 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 3.9 KiB/s wr, 105 op/s
2026-03-09T21:16:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:22 vm07 bash[28052]: cluster 2026-03-09T21:16:20.632872+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v237: 132 pgs: 17 creating+peering, 115 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 3.9 KiB/s wr, 105 op/s
2026-03-09T21:16:22.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:22 vm10 bash[23387]: cluster 2026-03-09T21:16:20.632872+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v237: 132 pgs: 17 creating+peering, 115 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 3.9 KiB/s wr, 105 op/s
2026-03-09T21:16:24.148 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:16:24.464 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled iscsi.datapool update...
2026-03-09T21:16:24.553 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg...
2026-03-09T21:16:24.553 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:16:24.553 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-09T21:16:24.569 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T21:16:24.569 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-09T21:16:24.578 DEBUG:teuthology.orchestra.run.vm10:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@iscsi.iscsi.a.service
2026-03-09T21:16:24.622 INFO:tasks.cephadm:Adding prometheus.a on vm10
2026-03-09T21:16:24.622 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply prometheus '1;vm10=a'
2026-03-09T21:16:24.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:24 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: cluster 2026-03-09T21:16:22.633397+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 6.0 KiB/s wr, 174 op/s
2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.462974+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y'
2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.464076+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.465840+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09
21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.465840+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.466416+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.466416+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.474156+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.808 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.474156+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.809 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.476314+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T21:16:24.809 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.476314+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T21:16:24.809 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.481751+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T21:16:24.809 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.481751+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T21:16:24.809 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.490046+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.809 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:24 vm10 bash[23387]: audit 2026-03-09T21:16:24.490046+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: cluster 2026-03-09T21:16:22.633397+0000 
mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 6.0 KiB/s wr, 174 op/s 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: cluster 2026-03-09T21:16:22.633397+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 6.0 KiB/s wr, 174 op/s 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.462974+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.462974+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.464076+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.464076+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.465840+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.465840+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.466416+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.466416+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.474156+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.474156+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.476314+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.476314+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd 
blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.481751+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.481751+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T21:16:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.490046+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:24 vm07 bash[28052]: audit 2026-03-09T21:16:24.490046+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: cluster 2026-03-09T21:16:22.633397+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 73 KiB/s rd, 6.0 KiB/s wr, 174 op/s 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: cluster 2026-03-09T21:16:22.633397+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 6.0 KiB/s wr, 174 op/s 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.462974+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.462974+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.464076+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.464076+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.465840+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.465840+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.866 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.466416+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.466416+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.474156+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.474156+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.476314+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.476314+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", 
"osd", "allow rwx"]}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.481751+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.481751+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.490046+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:24.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:24 vm07 bash[20771]: audit 2026-03-09T21:16:24.490046+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:25.386 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.386 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.386 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.386 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.386 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:25.386 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.387 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: audit 2026-03-09T21:16:24.455635+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24374 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.110", "placement": "1;vm10=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: audit 2026-03-09T21:16:24.455635+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24374 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.110", "placement": "1;vm10=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: cephadm 2026-03-09T21:16:24.457092+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm10=iscsi.a;count:1 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: cephadm 2026-03-09T21:16:24.457092+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm10=iscsi.a;count:1 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: cephadm 2026-03-09T21:16:24.491001+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon 
iscsi.iscsi.a on vm10 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: cephadm 2026-03-09T21:16:24.491001+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm10 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: audit 2026-03-09T21:16:25.514961+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: audit 2026-03-09T21:16:25.514961+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: audit 2026-03-09T21:16:25.523475+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:25 vm10 bash[23387]: audit 2026-03-09T21:16:25.523475+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.667 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.667 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.667 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.668 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.668 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:25.668 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:25.668 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:25 vm10 systemd[1]: Started Ceph iscsi.iscsi.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: audit 2026-03-09T21:16:24.455635+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24374 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.110", "placement": "1;vm10=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: audit 2026-03-09T21:16:24.455635+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24374 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.110", "placement": "1;vm10=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: cephadm 2026-03-09T21:16:24.457092+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm10=iscsi.a;count:1 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: cephadm 2026-03-09T21:16:24.457092+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm10=iscsi.a;count:1 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: cephadm 2026-03-09T21:16:24.491001+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm10 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: cephadm 2026-03-09T21:16:24.491001+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm10 2026-03-09T21:16:25.865 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: audit 2026-03-09T21:16:25.514961+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: audit 2026-03-09T21:16:25.514961+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: audit 2026-03-09T21:16:25.523475+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:25 vm07 bash[20771]: audit 2026-03-09T21:16:25.523475+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: audit 2026-03-09T21:16:24.455635+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24374 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.110", "placement": "1;vm10=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: audit 2026-03-09T21:16:24.455635+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24374 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.110", "placement": "1;vm10=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: cephadm 2026-03-09T21:16:24.457092+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm10=iscsi.a;count:1 
2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: cephadm 2026-03-09T21:16:24.457092+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm10=iscsi.a;count:1 2026-03-09T21:16:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: cephadm 2026-03-09T21:16:24.491001+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm10 2026-03-09T21:16:25.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: cephadm 2026-03-09T21:16:24.491001+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm10 2026-03-09T21:16:25.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: audit 2026-03-09T21:16:25.514961+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: audit 2026-03-09T21:16:25.514961+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: audit 2026-03-09T21:16:25.523475+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:25.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:25 vm07 bash[28052]: audit 2026-03-09T21:16:25.523475+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: debug Started the configuration object watcher 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: debug Checking for config object changes every 1s 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: 
debug Processing osd blocklist entries for this node 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: debug Reading the configuration object to update local LIO configuration 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: debug Configuration does not have an entry for this host(vm10.local) - nothing to define to LIO 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: * Environment: production 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: Use a production WSGI server instead. 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: * Debug mode: off 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: debug * Running on all addresses. 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: * Running on all addresses. 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: WARNING: This is a development server. Do not use it in a production deployment. 
2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T21:16:26.443 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:26 vm10 bash[48970]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: cluster 2026-03-09T21:16:24.633861+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 153 op/s 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: cluster 2026-03-09T21:16:24.633861+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 153 op/s 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:25.534783+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:25.534783+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: cephadm 2026-03-09T21:16:25.535694+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: cephadm 2026-03-09T21:16:25.535694+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:25.554937+0000 mon.a 
(mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:25.554937+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:25.569820+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:25.569820+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:26.348420+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.110:0/2597144663' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:26 vm07 bash[20771]: audit 2026-03-09T21:16:26.348420+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 
192.168.123.110:0/2597144663' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: cluster 2026-03-09T21:16:24.633861+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 153 op/s 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: cluster 2026-03-09T21:16:24.633861+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 153 op/s 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:25.534783+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:25.534783+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: cephadm 2026-03-09T21:16:25.535694+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T21:16:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: cephadm 2026-03-09T21:16:25.535694+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T21:16:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:25.554937+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:25.554937+0000 
mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:25.569820+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:25.569820+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:26.348420+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.110:0/2597144663' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T21:16:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:26 vm07 bash[28052]: audit 2026-03-09T21:16:26.348420+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 
192.168.123.110:0/2597144663' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: cluster 2026-03-09T21:16:24.633861+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 153 op/s 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: cluster 2026-03-09T21:16:24.633861+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v239: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 153 op/s 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:25.534783+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:25.534783+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: cephadm 2026-03-09T21:16:25.535694+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: cephadm 2026-03-09T21:16:25.535694+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:25.554937+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:25.554937+0000 
mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:25.569820+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:26.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:25.569820+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:26.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:26.348420+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 192.168.123.110:0/2597144663' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T21:16:26.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:26 vm10 bash[23387]: audit 2026-03-09T21:16:26.348420+0000 mon.b (mon.1) 31 : audit [DBG] from='client.? 
192.168.123.110:0/2597144663' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T21:16:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:27 vm07 bash[28052]: cluster 2026-03-09T21:16:26.634382+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 101 op/s 2026-03-09T21:16:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:27 vm07 bash[28052]: cluster 2026-03-09T21:16:26.634382+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 101 op/s 2026-03-09T21:16:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:27 vm07 bash[20771]: cluster 2026-03-09T21:16:26.634382+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 101 op/s 2026-03-09T21:16:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:27 vm07 bash[20771]: cluster 2026-03-09T21:16:26.634382+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 101 op/s 2026-03-09T21:16:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:27 vm10 bash[23387]: cluster 2026-03-09T21:16:26.634382+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 101 op/s 2026-03-09T21:16:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:27 vm10 bash[23387]: cluster 2026-03-09T21:16:26.634382+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 101 op/s 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:16:28 vm07 bash[20771]: cluster 2026-03-09T21:16:27.570881+0000 mon.a (mon.0) 707 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:28 vm07 bash[20771]: cluster 2026-03-09T21:16:27.570881+0000 mon.a (mon.0) 707 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:28 vm07 bash[20771]: audit 2026-03-09T21:16:27.649905+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:28 vm07 bash[20771]: audit 2026-03-09T21:16:27.649905+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:28 vm07 bash[28052]: cluster 2026-03-09T21:16:27.570881+0000 mon.a (mon.0) 707 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:28 vm07 bash[28052]: cluster 2026-03-09T21:16:27.570881+0000 mon.a (mon.0) 707 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:28 vm07 bash[28052]: audit 2026-03-09T21:16:27.649905+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:28 vm07 bash[28052]: audit 2026-03-09T21:16:27.649905+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:28 vm10 bash[23387]: cluster 2026-03-09T21:16:27.570881+0000 mon.a (mon.0) 707 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T21:16:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 
21:16:28 vm10 bash[23387]: cluster 2026-03-09T21:16:27.570881+0000 mon.a (mon.0) 707 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-09T21:16:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:28 vm10 bash[23387]: audit 2026-03-09T21:16:27.649905+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:28 vm10 bash[23387]: audit 2026-03-09T21:16:27.649905+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:29.321 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:16:29.627 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled prometheus update... 2026-03-09T21:16:29.692 DEBUG:teuthology.orchestra.run.vm10:prometheus.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@prometheus.a.service 2026-03-09T21:16:29.694 INFO:tasks.cephadm:Adding node-exporter.a on vm07 2026-03-09T21:16:29.694 INFO:tasks.cephadm:Adding node-exporter.b on vm10 2026-03-09T21:16:29.694 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply node-exporter '2;vm07=a;vm10=b' 2026-03-09T21:16:30.341 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:29 vm10 bash[23387]: cluster 2026-03-09T21:16:28.634982+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 84 op/s 2026-03-09T21:16:30.342 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:29 vm10 bash[23387]: cluster 2026-03-09T21:16:28.634982+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 
active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 84 op/s 2026-03-09T21:16:30.342 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:29 vm10 bash[23387]: cluster 2026-03-09T21:16:29.001889+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T21:16:30.342 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:29 vm10 bash[23387]: cluster 2026-03-09T21:16:29.001889+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T21:16:30.342 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:29 vm10 bash[23387]: audit 2026-03-09T21:16:29.626438+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:30.342 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:29 vm10 bash[23387]: audit 2026-03-09T21:16:29.626438+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:29 vm07 bash[20771]: cluster 2026-03-09T21:16:28.634982+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 84 op/s 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:29 vm07 bash[20771]: cluster 2026-03-09T21:16:28.634982+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 84 op/s 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:29 vm07 bash[20771]: cluster 2026-03-09T21:16:29.001889+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:29 vm07 bash[20771]: cluster 2026-03-09T21:16:29.001889+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T21:16:30.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:29 vm07 bash[20771]: audit 2026-03-09T21:16:29.626438+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:29 vm07 bash[20771]: audit 2026-03-09T21:16:29.626438+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:29 vm07 bash[28052]: cluster 2026-03-09T21:16:28.634982+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 84 op/s 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:29 vm07 bash[28052]: cluster 2026-03-09T21:16:28.634982+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 84 op/s 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:29 vm07 bash[28052]: cluster 2026-03-09T21:16:29.001889+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:29 vm07 bash[28052]: cluster 2026-03-09T21:16:29.001889+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:29 vm07 bash[28052]: audit 2026-03-09T21:16:29.626438+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:30.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:29 vm07 bash[28052]: audit 2026-03-09T21:16:29.626438+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: 
audit 2026-03-09T21:16:29.619501+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:29.619501+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: cephadm 2026-03-09T21:16:29.620792+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm10=a;count:1 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: cephadm 2026-03-09T21:16:29.620792+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm10=a;count:1 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.660433+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.660433+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.666282+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.666282+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.442 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.667243+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.667243+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.667825+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.667825+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.673557+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:31 vm10 bash[23387]: audit 2026-03-09T21:16:30.673557+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.443 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:31 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:29.619501+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:29.619501+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: cephadm 2026-03-09T21:16:29.620792+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm10=a;count:1 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: cephadm 2026-03-09T21:16:29.620792+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm10=a;count:1 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.660433+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.660433+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.666282+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 
bash[20771]: audit 2026-03-09T21:16:30.666282+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.667243+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.667243+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.667825+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.667825+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.673557+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:31 vm07 bash[20771]: audit 2026-03-09T21:16:30.673557+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:29.619501+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch 
apply", "service_type": "prometheus", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:29.619501+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: cephadm 2026-03-09T21:16:29.620792+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm10=a;count:1 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: cephadm 2026-03-09T21:16:29.620792+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm10=a;count:1 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.660433+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.660433+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.666282+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.666282+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.667243+0000 mon.a (mon.0) 713 : audit [DBG] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.667243+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.667825+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.667825+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.673557+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:31.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:31 vm07 bash[28052]: audit 2026-03-09T21:16:30.673557+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:32.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:32 vm10 bash[23387]: cluster 2026-03-09T21:16:30.635423+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 67 op/s 2026-03-09T21:16:32.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:32 vm10 bash[23387]: cluster 2026-03-09T21:16:30.635423+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 67 op/s 2026-03-09T21:16:32.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:32 vm10 bash[23387]: cephadm 2026-03-09T21:16:30.844500+0000 mgr.y (mgr.14150) 265 : cephadm [INF] Deploying daemon prometheus.a on vm10 2026-03-09T21:16:32.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:32 vm10 bash[23387]: cephadm 2026-03-09T21:16:30.844500+0000 mgr.y (mgr.14150) 265 : cephadm [INF] Deploying daemon prometheus.a on vm10 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:32 vm07 bash[20771]: cluster 2026-03-09T21:16:30.635423+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 67 op/s 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:32 vm07 bash[20771]: cluster 2026-03-09T21:16:30.635423+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 67 op/s 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:32 vm07 bash[20771]: cephadm 2026-03-09T21:16:30.844500+0000 mgr.y (mgr.14150) 265 : cephadm [INF] Deploying daemon prometheus.a on vm10 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:32 vm07 bash[20771]: cephadm 2026-03-09T21:16:30.844500+0000 mgr.y (mgr.14150) 265 : cephadm [INF] Deploying daemon prometheus.a on vm10 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:32 vm07 bash[28052]: cluster 2026-03-09T21:16:30.635423+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 67 op/s 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:32 vm07 bash[28052]: cluster 2026-03-09T21:16:30.635423+0000 mgr.y (mgr.14150) 264 : cluster [DBG] pgmap 
v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 67 op/s 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:32 vm07 bash[28052]: cephadm 2026-03-09T21:16:30.844500+0000 mgr.y (mgr.14150) 265 : cephadm [INF] Deploying daemon prometheus.a on vm10 2026-03-09T21:16:32.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:32 vm07 bash[28052]: cephadm 2026-03-09T21:16:30.844500+0000 mgr.y (mgr.14150) 265 : cephadm [INF] Deploying daemon prometheus.a on vm10 2026-03-09T21:16:34.356 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:16:34.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:34 vm07 bash[20771]: cluster 2026-03-09T21:16:32.635945+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.4 KiB/s rd, 102 B/s wr, 10 op/s 2026-03-09T21:16:34.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:34 vm07 bash[20771]: cluster 2026-03-09T21:16:32.635945+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.4 KiB/s rd, 102 B/s wr, 10 op/s 2026-03-09T21:16:34.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:34 vm07 bash[28052]: cluster 2026-03-09T21:16:32.635945+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.4 KiB/s rd, 102 B/s wr, 10 op/s 2026-03-09T21:16:34.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:34 vm07 bash[28052]: cluster 2026-03-09T21:16:32.635945+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.4 KiB/s rd, 102 B/s wr, 10 op/s 2026-03-09T21:16:34.617 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:34 vm10 bash[23387]: 
cluster 2026-03-09T21:16:32.635945+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.4 KiB/s rd, 102 B/s wr, 10 op/s 2026-03-09T21:16:34.617 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:34 vm10 bash[23387]: cluster 2026-03-09T21:16:32.635945+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.4 KiB/s rd, 102 B/s wr, 10 op/s 2026-03-09T21:16:35.042 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled node-exporter update... 2026-03-09T21:16:35.173 DEBUG:teuthology.orchestra.run.vm07:node-exporter.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@node-exporter.a.service 2026-03-09T21:16:35.174 DEBUG:teuthology.orchestra.run.vm10:node-exporter.b> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@node-exporter.b.service 2026-03-09T21:16:35.175 INFO:tasks.cephadm:Adding alertmanager.a on vm07 2026-03-09T21:16:35.175 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply alertmanager '1;vm07=a' 2026-03-09T21:16:36.172 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: cluster 2026-03-09T21:16:34.636495+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:36.173 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: cluster 2026-03-09T21:16:34.636495+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:36.173 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: audit 2026-03-09T21:16:35.031908+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm07=a;vm10=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:36.173 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: audit 2026-03-09T21:16:35.031908+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm07=a;vm10=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:36.173 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: cephadm 2026-03-09T21:16:35.032913+0000 mgr.y (mgr.14150) 269 : cephadm [INF] Saving service node-exporter spec with placement vm07=a;vm10=b;count:2 2026-03-09T21:16:36.173 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: cephadm 2026-03-09T21:16:35.032913+0000 mgr.y (mgr.14150) 269 : cephadm [INF] Saving service node-exporter spec with placement vm07=a;vm10=b;count:2 2026-03-09T21:16:36.173 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: audit 2026-03-09T21:16:35.041495+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:36.173 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 bash[23387]: audit 2026-03-09T21:16:35.041495+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:36.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: cluster 2026-03-09T21:16:34.636495+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 
216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: cluster 2026-03-09T21:16:34.636495+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: audit 2026-03-09T21:16:35.031908+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm07=a;vm10=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: audit 2026-03-09T21:16:35.031908+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm07=a;vm10=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: cephadm 2026-03-09T21:16:35.032913+0000 mgr.y (mgr.14150) 269 : cephadm [INF] Saving service node-exporter spec with placement vm07=a;vm10=b;count:2 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: cephadm 2026-03-09T21:16:35.032913+0000 mgr.y (mgr.14150) 269 : cephadm [INF] Saving service node-exporter spec with placement vm07=a;vm10=b;count:2 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: audit 2026-03-09T21:16:35.041495+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:36 vm07 bash[20771]: audit 2026-03-09T21:16:35.041495+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' 
entity='mgr.y' 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: cluster 2026-03-09T21:16:34.636495+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: cluster 2026-03-09T21:16:34.636495+0000 mgr.y (mgr.14150) 267 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: audit 2026-03-09T21:16:35.031908+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm07=a;vm10=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: audit 2026-03-09T21:16:35.031908+0000 mgr.y (mgr.14150) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm07=a;vm10=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: cephadm 2026-03-09T21:16:35.032913+0000 mgr.y (mgr.14150) 269 : cephadm [INF] Saving service node-exporter spec with placement vm07=a;vm10=b;count:2 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: cephadm 2026-03-09T21:16:35.032913+0000 mgr.y (mgr.14150) 269 : cephadm [INF] Saving service node-exporter spec with placement vm07=a;vm10=b;count:2 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: audit 2026-03-09T21:16:35.041495+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 
192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:36 vm07 bash[28052]: audit 2026-03-09T21:16:35.041495+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:36.775 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:36.775 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:36.775 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:36.775 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 systemd[1]: Started Ceph prometheus.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.917Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.917Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.917Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm10 (none))" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.917Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.917Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.920Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.921Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.923Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.923Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.283µs 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.923Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.923Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.923Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=15.339µs wal_replay_duration=173.736µs wbl_replay_duration=141ns total_replay_duration=206.527µs 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.925Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.925Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.925Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.926Z 
caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T21:16:37.102 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.926Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-09T21:16:37.103 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.940Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=15.110827ms db_storage=951ns remote_storage=1.073µs web_handler=481ns query_engine=1.082µs scrape=4.96283ms scrape_sd=111.76µs notify=621ns notify_sd=952ns rules=9.759539ms tracing=9.417µs 2026-03-09T21:16:37.103 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.940Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T21:16:37.103 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:36 vm10 bash[49946]: ts=2026-03-09T21:16:36.941Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.150727+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.150727+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.814557+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.814557+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.820185+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.820185+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.827461+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.827461+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.830502+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:37 vm07 bash[20771]: audit 2026-03-09T21:16:36.830502+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.150727+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.150727+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.814557+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.814557+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.820185+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.820185+0000 mon.a (mon.0) 718 : audit [INF] 
from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.827461+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.827461+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.830502+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T21:16:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:37 vm07 bash[28052]: audit 2026-03-09T21:16:36.830502+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.150727+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.150727+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.814557+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 
vm10 bash[23387]: audit 2026-03-09T21:16:36.814557+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.820185+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.820185+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.827461+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.827461+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.830502+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T21:16:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:37 vm10 bash[23387]: audit 2026-03-09T21:16:36.830502+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T21:16:38.149 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:37 vm10 bash[24097]: ignoring --setuser ceph since I am not root 2026-03-09T21:16:38.149 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:37 vm10 bash[24097]: ignoring --setgroup ceph since I am not root 2026-03-09T21:16:38.149 
INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:37 vm10 bash[24097]: debug 2026-03-09T21:16:37.977+0000 7f05c158d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T21:16:38.149 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:38 vm10 bash[24097]: debug 2026-03-09T21:16:38.013+0000 7f05c158d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T21:16:38.149 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:38 vm10 bash[24097]: debug 2026-03-09T21:16:38.145+0000 7f05c158d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T21:16:38.149 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: cluster 2026-03-09T21:16:36.637067+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v246: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:38.150 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: cluster 2026-03-09T21:16:36.637067+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v246: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:38.150 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: audit 2026-03-09T21:16:37.656697+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:38.150 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: audit 2026-03-09T21:16:37.656697+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:38.150 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: audit 2026-03-09T21:16:37.841020+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T21:16:38.150 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: audit 2026-03-09T21:16:37.841020+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T21:16:38.150 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: cluster 2026-03-09T21:16:37.858320+0000 mon.a (mon.0) 723 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T21:16:38.150 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:38 vm10 bash[23387]: cluster 2026-03-09T21:16:37.858320+0000 mon.a (mon.0) 723 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T21:16:38.165 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: cluster 2026-03-09T21:16:36.637067+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v246: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:38.165 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: cluster 2026-03-09T21:16:36.637067+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v246: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:38.165 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: audit 2026-03-09T21:16:37.656697+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:38.165 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: audit 2026-03-09T21:16:37.656697+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:38.165 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: audit 2026-03-09T21:16:37.841020+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 
cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T21:16:38.165 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: audit 2026-03-09T21:16:37.841020+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: cluster 2026-03-09T21:16:37.858320+0000 mon.a (mon.0) 723 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:38 vm07 bash[20771]: cluster 2026-03-09T21:16:37.858320+0000 mon.a (mon.0) 723 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: cluster 2026-03-09T21:16:36.637067+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v246: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: cluster 2026-03-09T21:16:36.637067+0000 mgr.y (mgr.14150) 271 : cluster [DBG] pgmap v246: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 102 B/s wr, 1 op/s 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: audit 2026-03-09T21:16:37.656697+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: audit 2026-03-09T21:16:37.656697+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: audit 2026-03-09T21:16:37.841020+0000 
mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: audit 2026-03-09T21:16:37.841020+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14150 192.168.123.107:0/1520190511' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: cluster 2026-03-09T21:16:37.858320+0000 mon.a (mon.0) 723 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:38 vm07 bash[28052]: cluster 2026-03-09T21:16:37.858320+0000 mon.a (mon.0) 723 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:37 vm07 bash[21040]: ignoring --setuser ceph since I am not root 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:37 vm07 bash[21040]: ignoring --setgroup ceph since I am not root 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:37 vm07 bash[21040]: debug 2026-03-09T21:16:37.973+0000 7f71770c6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T21:16:38.166 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:38 vm07 bash[21040]: debug 2026-03-09T21:16:38.013+0000 7f71770c6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T21:16:38.515 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:38 vm07 bash[21040]: debug 2026-03-09T21:16:38.165+0000 7f71770c6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T21:16:38.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:38 vm07 bash[21040]: debug 2026-03-09T21:16:38.513+0000 7f71770c6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 
2026-03-09T21:16:38.942 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:38 vm10 bash[24097]: debug 2026-03-09T21:16:38.481+0000 7f05c158d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T21:16:39.334 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.049+0000 7f71770c6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T21:16:39.334 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.173+0000 7f71770c6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T21:16:39.404 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:38 vm10 bash[24097]: debug 2026-03-09T21:16:38.981+0000 7f05c158d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T21:16:39.405 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.081+0000 7f05c158d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T21:16:39.405 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T21:16:39.405 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T21:16:39.405 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: from numpy import show_config as show_numpy_config 2026-03-09T21:16:39.405 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.229+0000 7f05c158d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T21:16:39.405 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.401+0000 7f05c158d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T21:16:39.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T21:16:39.609 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T21:16:39.609 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: from numpy import show_config as show_numpy_config 2026-03-09T21:16:39.609 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.337+0000 7f71770c6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T21:16:39.609 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.505+0000 7f71770c6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T21:16:39.609 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.549+0000 7f71770c6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T21:16:39.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.449+0000 7f05c158d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T21:16:39.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.497+0000 7f05c158d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T21:16:39.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.545+0000 7f05c158d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T21:16:39.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:39 vm10 bash[24097]: debug 2026-03-09T21:16:39.601+0000 7f05c158d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T21:16:39.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.605+0000 7f71770c6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T21:16:39.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.653+0000 7f71770c6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 
2026-03-09T21:16:39.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:39 vm07 bash[21040]: debug 2026-03-09T21:16:39.709+0000 7f71770c6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T21:16:40.395 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.121+0000 7f05c158d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T21:16:40.395 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.157+0000 7f05c158d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T21:16:40.395 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.193+0000 7f05c158d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T21:16:40.395 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.349+0000 7f05c158d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T21:16:40.469 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.193+0000 7f71770c6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T21:16:40.469 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.241+0000 7f71770c6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T21:16:40.469 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.293+0000 7f71770c6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T21:16:40.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.393+0000 7f05c158d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T21:16:40.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.433+0000 7f05c158d140 -1 mgr[py] Module 
osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T21:16:40.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.553+0000 7f05c158d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T21:16:40.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.465+0000 7f71770c6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T21:16:40.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.521+0000 7f71770c6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T21:16:40.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.569+0000 7f71770c6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T21:16:40.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.697+0000 7f71770c6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T21:16:40.875 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:16:40.996 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.729+0000 7f05c158d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T21:16:40.996 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:40 vm10 bash[24097]: debug 2026-03-09T21:16:40.945+0000 7f05c158d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T21:16:41.186 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:40 vm07 bash[21040]: debug 2026-03-09T21:16:40.889+0000 7f71770c6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T21:16:41.186 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: debug 2026-03-09T21:16:41.129+0000 7f71770c6140 -1 mgr[py] Module 
prometheus has missing NOTIFY_TYPES member 2026-03-09T21:16:41.264 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: debug 2026-03-09T21:16:41.009+0000 7f05c158d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T21:16:41.264 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: debug 2026-03-09T21:16:41.069+0000 7f05c158d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T21:16:41.544 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: debug 2026-03-09T21:16:41.261+0000 7f05c158d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T21:16:41.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: debug 2026-03-09T21:16:41.185+0000 7f71770c6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T21:16:41.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: debug 2026-03-09T21:16:41.237+0000 7f71770c6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T21:16:41.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: debug 2026-03-09T21:16:41.401+0000 7f71770c6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: debug 2026-03-09T21:16:41.541+0000 7f05c158d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: [09/Mar/2026:21:16:41] ENGINE Bus STARTING 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: CherryPy Checker: 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: The Application mounted at '' has an empty config. 
2026-03-09T21:16:41.797 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: [09/Mar/2026:21:16:41] ENGINE Serving on http://:::9283 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:41 vm10 bash[24097]: [09/Mar/2026:21:16:41] ENGINE Bus STARTED 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: cluster 2026-03-09T21:16:41.549799+0000 mon.a (mon.0) 724 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: cluster 2026-03-09T21:16:41.549799+0000 mon.a (mon.0) 724 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: cluster 2026-03-09T21:16:41.550053+0000 mon.a (mon.0) 725 : cluster [DBG] Standby manager daemon x started 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: cluster 2026-03-09T21:16:41.550053+0000 mon.a (mon.0) 725 : cluster [DBG] Standby manager daemon x started 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.550410+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.550410+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.551232+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.551232+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.552855+0000 mon.b (mon.1) 34 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.552855+0000 mon.b (mon.1) 34 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.553315+0000 mon.b (mon.1) 35 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:16:41.797 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:41 vm10 bash[23387]: audit 2026-03-09T21:16:41.553315+0000 mon.b (mon.1) 35 : audit [DBG] from='mgr.? 
192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: cluster 2026-03-09T21:16:41.549799+0000 mon.a (mon.0) 724 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: cluster 2026-03-09T21:16:41.549799+0000 mon.a (mon.0) 724 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: cluster 2026-03-09T21:16:41.550053+0000 mon.a (mon.0) 725 : cluster [DBG] Standby manager daemon x started 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: cluster 2026-03-09T21:16:41.550053+0000 mon.a (mon.0) 725 : cluster [DBG] Standby manager daemon x started 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.550410+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.550410+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.551232+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.551232+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.552855+0000 mon.b (mon.1) 34 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.552855+0000 mon.b (mon.1) 34 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.553315+0000 mon.b (mon.1) 35 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:41 vm07 bash[20771]: audit 2026-03-09T21:16:41.553315+0000 mon.b (mon.1) 35 : audit [DBG] from='mgr.? 
192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: debug 2026-03-09T21:16:41.681+0000 7f71770c6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: [09/Mar/2026:21:16:41] ENGINE Bus STARTING
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: CherryPy Checker:
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: The Application mounted at '' has an empty config.
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: [09/Mar/2026:21:16:41] ENGINE Serving on http://:::9283
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:41 vm07 bash[21040]: [09/Mar/2026:21:16:41] ENGINE Bus STARTED
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:41 vm07 bash[28052]: cluster 2026-03-09T21:16:41.549799+0000 mon.a (mon.0) 724 : cluster [DBG] Standby manager daemon x restarted
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:41 vm07 bash[28052]: cluster 2026-03-09T21:16:41.550053+0000 mon.a (mon.0) 725 : cluster [DBG] Standby manager daemon x started
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:41 vm07 bash[28052]: audit 2026-03-09T21:16:41.550410+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:41 vm07 bash[28052]: audit 2026-03-09T21:16:41.551232+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:41 vm07 bash[28052]: audit 2026-03-09T21:16:41.552855+0000 mon.b (mon.1) 34 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-09T21:16:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:41 vm07 bash[28052]: audit 2026-03-09T21:16:41.553315+0000 mon.b (mon.1) 35 : audit [DBG] from='mgr.? 192.168.123.110:0/1039909002' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T21:16:42.745 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled alertmanager update...
2026-03-09T21:16:42.823 DEBUG:teuthology.orchestra.run.vm07:alertmanager.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@alertmanager.a.service
2026-03-09T21:16:42.825 INFO:tasks.cephadm:Adding grafana.a on vm10
2026-03-09T21:16:42.825 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph orch apply grafana '1;vm10=a'
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: cluster 2026-03-09T21:16:41.615189+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e18: y(active, since 6m), standbys: x
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: cluster 2026-03-09T21:16:41.686999+0000 mon.a (mon.0) 727 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: cluster 2026-03-09T21:16:41.687638+0000 mon.a (mon.0) 728 : cluster [INF] Activating manager daemon y
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.713812+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.713922+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.713975+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: cluster 2026-03-09T21:16:41.714224+0000 mon.a (mon.0) 729 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: cluster 2026-03-09T21:16:41.714822+0000 mon.a (mon.0) 730 : cluster [DBG] mgrmap e19: y(active, starting, since 0.0275419s), standbys: x
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.716357+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.716552+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.716683+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.716828+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.717281+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.717415+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.717777+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.718214+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:16:42.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.718496+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.718788+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.720029+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.720320+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.720732+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: cluster 2026-03-09T21:16:41.729759+0000 mon.a (mon.0) 731 : cluster [INF] Manager daemon y is now available
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.749416+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.764910+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.768820+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.771275+0000 mon.c (mon.2) 34 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.771594+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.826888+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:16:42.944 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:42 vm10 bash[23387]: audit 2026-03-09T21:16:41.827298+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: cluster 2026-03-09T21:16:41.615189+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e18: y(active, since 6m), standbys: x
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: cluster 2026-03-09T21:16:41.686999+0000 mon.a (mon.0) 727 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: cluster 2026-03-09T21:16:41.687638+0000 mon.a (mon.0) 728 : cluster [INF] Activating manager daemon y
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.713812+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.713922+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.713975+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: cluster 2026-03-09T21:16:41.714224+0000 mon.a (mon.0) 729 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: cluster 2026-03-09T21:16:41.714822+0000 mon.a (mon.0) 730 : cluster [DBG] mgrmap e19: y(active, starting, since 0.0275419s), standbys: x
2026-03-09T21:16:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.716357+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.716552+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.716683+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.716828+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.717281+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.717415+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.717777+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.718214+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.718496+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.718788+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.720029+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.720320+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.720732+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: cluster 2026-03-09T21:16:41.729759+0000 mon.a (mon.0) 731 : cluster [INF] Manager daemon y is now available
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.749416+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.764910+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.768820+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.771275+0000 mon.c (mon.2) 34 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.771594+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.826888+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:16:43.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:42 vm07 bash[20771]: audit 2026-03-09T21:16:41.827298+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.615189+0000 mon.a (mon.0) 726 : cluster [DBG] mgrmap e18: y(active, since 6m), standbys: x
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.686999+0000 mon.a (mon.0) 727 : cluster [INF] Active manager daemon y restarted
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.687638+0000 mon.a (mon.0) 728 : cluster [INF] Activating manager daemon y
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.713812+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.713922+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.713975+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.714224+0000 mon.a (mon.0) 729 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.714822+0000 mon.a (mon.0) 730 : cluster [DBG] mgrmap e19: y(active, starting, since 0.0275419s), standbys: x
2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.714822+0000
mon.a (mon.0) 730 : cluster [DBG] mgrmap e19: y(active, starting, since 0.0275419s), standbys: x 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716357+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716357+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716552+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716552+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716683+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716683+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716828+0000 mon.c (mon.2) 22 : audit [DBG] 
from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.716828+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.717281+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.717281+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.717415+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.717415+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.717777+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.717777+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.718214+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.718214+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.718496+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.718496+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.718788+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.718788+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.720029+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mds 
metadata"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.720029+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.720320+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.720320+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.720732+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.720732+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.729759+0000 mon.a (mon.0) 731 : cluster [INF] Manager daemon y is now available 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: cluster 2026-03-09T21:16:41.729759+0000 mon.a (mon.0) 731 : cluster [INF] Manager daemon y is now available 2026-03-09T21:16:43.117 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.749416+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.749416+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.764910+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.764910+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.768820+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.768820+0000 mon.c (mon.2) 33 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.771275+0000 mon.c (mon.2) 34 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.771275+0000 mon.c (mon.2) 34 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.771594+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.771594+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.826888+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.826888+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.827298+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:16:43.118 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:42 vm07 bash[28052]: audit 2026-03-09T21:16:41.827298+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T21:16:44.115 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: cephadm 2026-03-09T21:16:42.718169+0000 mgr.y (mgr.24416) 2 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: cephadm 2026-03-09T21:16:42.718169+0000 mgr.y (mgr.24416) 2 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: cluster 2026-03-09T21:16:42.729399+0000 mon.a (mon.0) 735 : cluster [DBG] mgrmap e20: y(active, since 1.0421s), standbys: x 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: cluster 2026-03-09T21:16:42.729399+0000 mon.a (mon.0) 735 : cluster [DBG] mgrmap e20: y(active, since 1.0421s), standbys: x 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: audit 2026-03-09T21:16:42.740764+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: audit 2026-03-09T21:16:42.740764+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: cluster 2026-03-09T21:16:42.742673+0000 mgr.y (mgr.24416) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:43 vm07 bash[20771]: cluster 2026-03-09T21:16:42.742673+0000 mgr.y (mgr.24416) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: cephadm 2026-03-09T21:16:42.718169+0000 mgr.y (mgr.24416) 2 : cephadm [INF] Saving 
service alertmanager spec with placement vm07=a;count:1 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: cephadm 2026-03-09T21:16:42.718169+0000 mgr.y (mgr.24416) 2 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: cluster 2026-03-09T21:16:42.729399+0000 mon.a (mon.0) 735 : cluster [DBG] mgrmap e20: y(active, since 1.0421s), standbys: x 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: cluster 2026-03-09T21:16:42.729399+0000 mon.a (mon.0) 735 : cluster [DBG] mgrmap e20: y(active, since 1.0421s), standbys: x 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: audit 2026-03-09T21:16:42.740764+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: audit 2026-03-09T21:16:42.740764+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: cluster 2026-03-09T21:16:42.742673+0000 mgr.y (mgr.24416) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:43 vm07 bash[28052]: cluster 2026-03-09T21:16:42.742673+0000 mgr.y (mgr.24416) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: cephadm 2026-03-09T21:16:42.718169+0000 mgr.y (mgr.24416) 2 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: cephadm 
2026-03-09T21:16:42.718169+0000 mgr.y (mgr.24416) 2 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: cluster 2026-03-09T21:16:42.729399+0000 mon.a (mon.0) 735 : cluster [DBG] mgrmap e20: y(active, since 1.0421s), standbys: x 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: cluster 2026-03-09T21:16:42.729399+0000 mon.a (mon.0) 735 : cluster [DBG] mgrmap e20: y(active, since 1.0421s), standbys: x 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: audit 2026-03-09T21:16:42.740764+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: audit 2026-03-09T21:16:42.740764+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: cluster 2026-03-09T21:16:42.742673+0000 mgr.y (mgr.24416) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:43 vm10 bash[23387]: cluster 2026-03-09T21:16:42.742673+0000 mgr.y (mgr.24416) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.290159+0000 mgr.y (mgr.24416) 4 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTING 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.290159+0000 mgr.y (mgr.24416) 4 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTING 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.392028+0000 mgr.y (mgr.24416) 5 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.392028+0000 mgr.y (mgr.24416) 5 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.505155+0000 mgr.y (mgr.24416) 6 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.505155+0000 mgr.y (mgr.24416) 6 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.505340+0000 mgr.y (mgr.24416) 7 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTED 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.505340+0000 mgr.y (mgr.24416) 7 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTED 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.505796+0000 mgr.y (mgr.24416) 8 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Client ('192.168.123.107', 38522) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cephadm 2026-03-09T21:16:43.505796+0000 mgr.y (mgr.24416) 8 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Client ('192.168.123.107', 38522) lost — peer dropped the TLS connection suddenly, during handshake: 
(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cluster 2026-03-09T21:16:43.720033+0000 mgr.y (mgr.24416) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cluster 2026-03-09T21:16:43.720033+0000 mgr.y (mgr.24416) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cluster 2026-03-09T21:16:43.752784+0000 mon.a (mon.0) 737 : cluster [DBG] mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:44 vm07 bash[20771]: cluster 2026-03-09T21:16:43.752784+0000 mon.a (mon.0) 737 : cluster [DBG] mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.290159+0000 mgr.y (mgr.24416) 4 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTING 2026-03-09T21:16:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.290159+0000 mgr.y (mgr.24416) 4 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTING 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.392028+0000 mgr.y (mgr.24416) 5 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.392028+0000 mgr.y (mgr.24416) 5 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:16:45.116 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.505155+0000 mgr.y (mgr.24416) 6 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.505155+0000 mgr.y (mgr.24416) 6 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.505340+0000 mgr.y (mgr.24416) 7 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTED 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.505340+0000 mgr.y (mgr.24416) 7 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTED 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.505796+0000 mgr.y (mgr.24416) 8 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Client ('192.168.123.107', 38522) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cephadm 2026-03-09T21:16:43.505796+0000 mgr.y (mgr.24416) 8 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Client ('192.168.123.107', 38522) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cluster 2026-03-09T21:16:43.720033+0000 mgr.y (mgr.24416) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cluster 
2026-03-09T21:16:43.720033+0000 mgr.y (mgr.24416) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cluster 2026-03-09T21:16:43.752784+0000 mon.a (mon.0) 737 : cluster [DBG] mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T21:16:45.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:44 vm07 bash[28052]: cluster 2026-03-09T21:16:43.752784+0000 mon.a (mon.0) 737 : cluster [DBG] mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.290159+0000 mgr.y (mgr.24416) 4 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTING 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.290159+0000 mgr.y (mgr.24416) 4 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTING 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.392028+0000 mgr.y (mgr.24416) 5 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.392028+0000 mgr.y (mgr.24416) 5 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.505155+0000 mgr.y (mgr.24416) 6 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.505155+0000 mgr.y (mgr.24416) 6 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Serving on https://192.168.123.107:7150 
2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.505340+0000 mgr.y (mgr.24416) 7 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTED 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.505340+0000 mgr.y (mgr.24416) 7 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Bus STARTED 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.505796+0000 mgr.y (mgr.24416) 8 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Client ('192.168.123.107', 38522) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:16:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cephadm 2026-03-09T21:16:43.505796+0000 mgr.y (mgr.24416) 8 : cephadm [INF] [09/Mar/2026:21:16:43] ENGINE Client ('192.168.123.107', 38522) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T21:16:45.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cluster 2026-03-09T21:16:43.720033+0000 mgr.y (mgr.24416) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cluster 2026-03-09T21:16:43.720033+0000 mgr.y (mgr.24416) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:16:45.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: cluster 2026-03-09T21:16:43.752784+0000 mon.a (mon.0) 737 : cluster [DBG] mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T21:16:45.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:44 vm10 bash[23387]: 
cluster 2026-03-09T21:16:43.752784+0000 mon.a (mon.0) 737 : cluster [DBG] mgrmap e21: y(active, since 2s), standbys: x
2026-03-09T21:16:46.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:46 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:16:46.977 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config
2026-03-09T21:16:47.069 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:46 vm10 bash[23387]: cluster 2026-03-09T21:16:45.720349+0000 mgr.y (mgr.24416) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:16:47.070 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:46 vm10 bash[23387]: cluster 2026-03-09T21:16:45.777833+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e22: y(active, since 4s), standbys: x
2026-03-09T21:16:47.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:46 vm07 bash[20771]: cluster 2026-03-09T21:16:45.720349+0000 mgr.y (mgr.24416) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:16:47.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:46 vm07 bash[20771]: cluster 2026-03-09T21:16:45.777833+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e22: y(active, since 4s), standbys: x
2026-03-09T21:16:47.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:46 vm07 bash[28052]: cluster 2026-03-09T21:16:45.720349+0000 mgr.y (mgr.24416) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:16:47.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:46 vm07 bash[28052]: cluster 2026-03-09T21:16:45.777833+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e22: y(active, since 4s), standbys: x
2026-03-09T21:16:47.524 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled grafana update...
2026-03-09T21:16:47.625 DEBUG:teuthology.orchestra.run.vm10:grafana.a> sudo journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@grafana.a.service
2026-03-09T21:16:47.626 INFO:tasks.cephadm:Setting up client nodes...
2026-03-09T21:16:47.626 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-09T21:16:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:47 vm10 bash[23387]: audit 2026-03-09T21:16:46.161680+0000 mgr.y (mgr.24416) 11 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:16:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:47 vm10 bash[23387]: audit 2026-03-09T21:16:47.513426+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:47 vm10 bash[23387]: audit 2026-03-09T21:16:47.522699+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:47 vm10 bash[23387]: audit 2026-03-09T21:16:47.530200+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:47 vm10 bash[23387]: audit 2026-03-09T21:16:47.538666+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:47 vm10 bash[23387]: audit 2026-03-09T21:16:47.549744+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:47 vm07 bash[20771]: audit 2026-03-09T21:16:46.161680+0000 mgr.y (mgr.24416) 11 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:47 vm07 bash[20771]: audit 2026-03-09T21:16:47.513426+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:47 vm07 bash[20771]: audit 2026-03-09T21:16:47.522699+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:47 vm07 bash[20771]: audit 2026-03-09T21:16:47.530200+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:47 vm07 bash[20771]: audit 2026-03-09T21:16:47.538666+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:47 vm07 bash[20771]: audit 2026-03-09T21:16:47.549744+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:47 vm07 bash[28052]: audit 2026-03-09T21:16:46.161680+0000 mgr.y (mgr.24416) 11 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:47 vm07 bash[28052]: audit 2026-03-09T21:16:47.513426+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:47 vm07 bash[28052]: audit 2026-03-09T21:16:47.522699+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:47 vm07 bash[28052]: audit 2026-03-09T21:16:47.530200+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:47 vm07 bash[28052]: audit 2026-03-09T21:16:47.538666+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:47 vm07 bash[28052]: audit 2026-03-09T21:16:47.549744+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:47.367216+0000 mgr.y (mgr.24416) 12 : audit [DBG] from='client.24445 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: cephadm 2026-03-09T21:16:47.368383+0000 mgr.y (mgr.24416) 13 : cephadm [INF] Saving service grafana spec with placement vm10=a;count:1
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: cluster 2026-03-09T21:16:47.720756+0000 mgr.y (mgr.24416) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.237966+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.247267+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.249382+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.249737+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.284172+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.292259+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.293956+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.294451+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:16:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:16:48.917 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:48 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:47.367216+0000 mgr.y (mgr.24416) 12 : audit [DBG] from='client.24445 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: cephadm 2026-03-09T21:16:47.368383+0000 mgr.y (mgr.24416) 13 : cephadm [INF] Saving service grafana spec with placement vm10=a;count:1
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: cluster 2026-03-09T21:16:47.720756+0000 mgr.y (mgr.24416) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.237966+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.247267+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.249382+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.249737+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.284172+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.292259+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.293956+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.294451+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.295572+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.296563+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.498939+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.505709+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.563464+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.579139+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:48 vm07 bash[28052]: audit 2026-03-09T21:16:48.611518+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.295572+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.296563+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.498939+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.505709+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.563464+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.579139+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:48.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:48 vm07 bash[20771]: audit 2026-03-09T21:16:48.611518+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:47.367216+0000 mgr.y (mgr.24416) 12 : audit [DBG] from='client.24445 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm10=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T21:16:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: cephadm 2026-03-09T21:16:47.368383+0000 mgr.y (mgr.24416) 13 : cephadm [INF] Saving service grafana spec with placement vm10=a;count:1
2026-03-09T21:16:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: cluster 2026-03-09T21:16:47.720756+0000 mgr.y (mgr.24416) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.237966+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.247267+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.249382+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.249737+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.284172+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.292259+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.293956+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.294451+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.295572+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.296563+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.498939+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.505709+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.563464+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.579139+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:48 vm10 bash[23387]: audit 2026-03-09T21:16:48.611518+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:49.291 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:49.291 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:49.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:49.291 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:49.291 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:49.291 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.292 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.292 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.292 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:49.292 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.292 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:49.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: Started Ceph node-exporter.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:16:49.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[54814]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T21:16:49.616 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.616 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:16:49 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.992 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.993 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.993 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.993 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.993 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.993 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.297806+0000 mgr.y (mgr.24416) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.297806+0000 mgr.y (mgr.24416) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.297943+0000 mgr.y (mgr.24416) 16 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.297943+0000 mgr.y (mgr.24416) 16 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.349823+0000 mgr.y (mgr.24416) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.349823+0000 mgr.y (mgr.24416) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:49.993 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.349955+0000 mgr.y (mgr.24416) 18 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.349955+0000 mgr.y (mgr.24416) 18 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.391650+0000 mgr.y (mgr.24416) 19 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:49.993 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.391650+0000 mgr.y (mgr.24416) 19 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.395304+0000 mgr.y (mgr.24416) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.395304+0000 mgr.y (mgr.24416) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.441757+0000 mgr.y (mgr.24416) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.441757+0000 mgr.y (mgr.24416) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 
21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.443413+0000 mgr.y (mgr.24416) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.443413+0000 mgr.y (mgr.24416) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.613817+0000 mgr.y (mgr.24416) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: cephadm 2026-03-09T21:16:48.613817+0000 mgr.y (mgr.24416) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: audit 2026-03-09T21:16:49.423272+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: audit 2026-03-09T21:16:49.423272+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: audit 2026-03-09T21:16:49.432422+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: audit 2026-03-09T21:16:49.432422+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 bash[23387]: audit 2026-03-09T21:16:49.441544+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 
bash[23387]: audit 2026-03-09T21:16:49.441544+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:49.994 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:49.994 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:49 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.297806+0000 mgr.y (mgr.24416) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.297806+0000 mgr.y (mgr.24416) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.297943+0000 mgr.y (mgr.24416) 16 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.297943+0000 mgr.y (mgr.24416) 16 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.349823+0000 mgr.y (mgr.24416) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.349823+0000 mgr.y (mgr.24416) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.349955+0000 mgr.y (mgr.24416) 18 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.349955+0000 mgr.y (mgr.24416) 18 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.391650+0000 mgr.y 
(mgr.24416) 19 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.391650+0000 mgr.y (mgr.24416) 19 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.395304+0000 mgr.y (mgr.24416) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.395304+0000 mgr.y (mgr.24416) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.441757+0000 mgr.y (mgr.24416) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.441757+0000 mgr.y (mgr.24416) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.443413+0000 mgr.y (mgr.24416) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.443413+0000 mgr.y (mgr.24416) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.613817+0000 mgr.y (mgr.24416) 23 
: cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: cephadm 2026-03-09T21:16:48.613817+0000 mgr.y (mgr.24416) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: audit 2026-03-09T21:16:49.423272+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: audit 2026-03-09T21:16:49.423272+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: audit 2026-03-09T21:16:49.432422+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: audit 2026-03-09T21:16:49.432422+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: audit 2026-03-09T21:16:49.441544+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:49 vm07 bash[28052]: audit 2026-03-09T21:16:49.441544+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.297806+0000 mgr.y (mgr.24416) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.297806+0000 mgr.y (mgr.24416) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 
2026-03-09T21:16:48.297943+0000 mgr.y (mgr.24416) 16 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.297943+0000 mgr.y (mgr.24416) 16 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.349823+0000 mgr.y (mgr.24416) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.349823+0000 mgr.y (mgr.24416) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.349955+0000 mgr.y (mgr.24416) 18 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.349955+0000 mgr.y (mgr.24416) 18 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.conf 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.391650+0000 mgr.y (mgr.24416) 19 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.391650+0000 mgr.y (mgr.24416) 19 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.395304+0000 mgr.y (mgr.24416) 20 : cephadm [INF] Updating 
vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.395304+0000 mgr.y (mgr.24416) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.441757+0000 mgr.y (mgr.24416) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.441757+0000 mgr.y (mgr.24416) 21 : cephadm [INF] Updating vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.443413+0000 mgr.y (mgr.24416) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.443413+0000 mgr.y (mgr.24416) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/config/ceph.client.admin.keyring 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.613817+0000 mgr.y (mgr.24416) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: cephadm 2026-03-09T21:16:48.613817+0000 mgr.y (mgr.24416) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: audit 2026-03-09T21:16:49.423272+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: audit 2026-03-09T21:16:49.423272+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: audit 2026-03-09T21:16:49.432422+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: audit 2026-03-09T21:16:49.432422+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: audit 2026-03-09T21:16:49.441544+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:49 vm07 bash[20771]: audit 2026-03-09T21:16:49.441544+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.246 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:50.247 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:50.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:50.247 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:50.247 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:50.247 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:16:50.247 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:50.247 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:50.692 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:50 vm10 systemd[1]: Started Ceph node-exporter.b for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:16:50.692 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[50706]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: cephadm 2026-03-09T21:16:49.444805+0000 mgr.y (mgr.24416) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm10 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: cephadm 2026-03-09T21:16:49.444805+0000 mgr.y (mgr.24416) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm10 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: cluster 2026-03-09T21:16:49.721263+0000 mgr.y (mgr.24416) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: cluster 2026-03-09T21:16:49.721263+0000 mgr.y (mgr.24416) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.286922+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.286922+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.299034+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.299034+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.306096+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.306096+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.312322+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.312322+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.318512+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:50 vm07 bash[28052]: audit 2026-03-09T21:16:50.318512+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: cephadm 2026-03-09T21:16:49.444805+0000 mgr.y (mgr.24416) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm10 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: cephadm 2026-03-09T21:16:49.444805+0000 mgr.y (mgr.24416) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm10 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: cluster 2026-03-09T21:16:49.721263+0000 mgr.y (mgr.24416) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:16:50 vm07 bash[20771]: cluster 2026-03-09T21:16:49.721263+0000 mgr.y (mgr.24416) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.286922+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.286922+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.299034+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.299034+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.306096+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.306096+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.312322+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.312322+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.318512+0000 mon.a (mon.0) 762 : audit [INF] 
from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[20771]: audit 2026-03-09T21:16:50.318512+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:50.866 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:50 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: cephadm 2026-03-09T21:16:49.444805+0000 mgr.y (mgr.24416) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm10 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: cephadm 2026-03-09T21:16:49.444805+0000 mgr.y (mgr.24416) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm10 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: cluster 2026-03-09T21:16:49.721263+0000 mgr.y (mgr.24416) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: cluster 2026-03-09T21:16:49.721263+0000 mgr.y (mgr.24416) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.286922+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 
vm10 bash[23387]: audit 2026-03-09T21:16:50.286922+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.299034+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.299034+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.306096+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.306096+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.312322+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.312322+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.318512+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:50 vm10 bash[23387]: audit 2026-03-09T21:16:50.318512+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.365 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:50 vm07 bash[54814]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T21:16:51.793 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 2abcce694348: 
Pulling fs layer 2026-03-09T21:16:51.794 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 455fd88e5221: Pulling fs layer 2026-03-09T21:16:51.794 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 324153f2810a: Pulling fs layer 2026-03-09T21:16:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:51 vm10 bash[23387]: cephadm 2026-03-09T21:16:50.324316+0000 mgr.y (mgr.24416) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T21:16:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:51 vm10 bash[23387]: cephadm 2026-03-09T21:16:50.324316+0000 mgr.y (mgr.24416) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T21:16:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:51 vm10 bash[23387]: audit 2026-03-09T21:16:51.762655+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:51 vm10 bash[23387]: audit 2026-03-09T21:16:51.762655+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:51.942 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:51 vm10 bash[50706]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[20771]: cephadm 2026-03-09T21:16:50.324316+0000 mgr.y (mgr.24416) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[20771]: cephadm 2026-03-09T21:16:50.324316+0000 mgr.y (mgr.24416) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[20771]: audit 2026-03-09T21:16:51.762655+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[20771]: 
audit 2026-03-09T21:16:51.762655+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:51 vm07 bash[28052]: cephadm 2026-03-09T21:16:50.324316+0000 mgr.y (mgr.24416) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:51 vm07 bash[28052]: cephadm 2026-03-09T21:16:50.324316+0000 mgr.y (mgr.24416) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:51 vm07 bash[28052]: audit 2026-03-09T21:16:51.762655+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:52.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:51 vm07 bash[28052]: audit 2026-03-09T21:16:51.762655+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:52.083 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 2abcce694348: Verifying Checksum 2026-03-09T21:16:52.083 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 2abcce694348: Download complete 2026-03-09T21:16:52.083 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 455fd88e5221: Verifying Checksum 2026-03-09T21:16:52.083 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 455fd88e5221: Download complete 2026-03-09T21:16:52.083 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:51 vm07 bash[54814]: 2abcce694348: Pull complete 2026-03-09T21:16:52.295 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:16:52.335 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: 324153f2810a: Verifying Checksum 2026-03-09T21:16:52.335 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 
vm07 bash[54814]: 324153f2810a: Download complete 2026-03-09T21:16:52.335 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: 455fd88e5221: Pull complete 2026-03-09T21:16:52.335 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: 324153f2810a: Pull complete 2026-03-09T21:16:52.336 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T21:16:52.336 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T21:16:52.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 2abcce694348: Pulling fs layer 2026-03-09T21:16:52.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 455fd88e5221: Pulling fs layer 2026-03-09T21:16:52.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 324153f2810a: Pulling fs layer 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.383Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.383Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.383Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" 
flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.383Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 
21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 
level=info collector=filesystem 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T21:16:52.615 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T21:16:52.616 
INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 
bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.384Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.385Z caller=tls_config.go:274 level=info 
msg="Listening on" address=[::]:9100
2026-03-09T21:16:52.616 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[54814]: ts=2026-03-09T21:16:52.385Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-09T21:16:52.683 INFO:teuthology.orchestra.run.vm07.stdout:[client.0]
2026-03-09T21:16:52.683 INFO:teuthology.orchestra.run.vm07.stdout: key = AQDEOK9pjXUyKBAAs7iJg7T6lZ51HkH5uA2yvA==
2026-03-09T21:16:52.770 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:16:52.770 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-09T21:16:52.770 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-09T21:16:52.789 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 455fd88e5221: Verifying Checksum
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 455fd88e5221: Download complete
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 2abcce694348: Verifying Checksum
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 2abcce694348: Download complete
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 2abcce694348: Pull complete
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 324153f2810a: Verifying Checksum
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 324153f2810a: Download complete
2026-03-09T21:16:52.802 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 455fd88e5221: Pull complete
2026-03-09T21:16:53.070 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[23387]: cluster 2026-03-09T21:16:51.721640+0000 mgr.y (mgr.24416) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T21:16:53.071 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[23387]: cluster 2026-03-09T21:16:51.721640+0000 mgr.y (mgr.24416) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T21:16:53.071 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[23387]: audit 2026-03-09T21:16:52.674126+0000 mon.a (mon.0) 764 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T21:16:53.071 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[23387]: audit 2026-03-09T21:16:52.674126+0000 mon.a (mon.0) 764 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T21:16:53.071 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[23387]: audit 2026-03-09T21:16:52.679130+0000 mon.a (mon.0) 765 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T21:16:53.071 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[23387]: audit 2026-03-09T21:16:52.679130+0000 mon.a (mon.0) 765 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T21:16:53.071 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: 324153f2810a: Pull complete
2026-03-09T21:16:53.071 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-09T21:16:53.071 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:52 vm10 bash[50706]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-09T21:16:53.071 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.070Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-09T21:16:53.071 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.070Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[20771]: cluster 2026-03-09T21:16:51.721640+0000 mgr.y (mgr.24416) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[20771]: cluster 2026-03-09T21:16:51.721640+0000 mgr.y (mgr.24416) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[20771]: audit 2026-03-09T21:16:52.674126+0000 mon.a (mon.0) 764 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[20771]: audit 2026-03-09T21:16:52.674126+0000 mon.a (mon.0) 764 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[20771]: audit 2026-03-09T21:16:52.679130+0000 mon.a (mon.0) 765 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:52 vm07 bash[20771]: audit 2026-03-09T21:16:52.679130+0000 mon.a (mon.0) 765 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:52 vm07 bash[28052]: cluster 2026-03-09T21:16:51.721640+0000 mgr.y (mgr.24416) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:52 vm07 bash[28052]: cluster 2026-03-09T21:16:51.721640+0000 mgr.y (mgr.24416) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:52 vm07 bash[28052]: audit 2026-03-09T21:16:52.674126+0000 mon.a (mon.0) 764 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:52 vm07 bash[28052]: audit 2026-03-09T21:16:52.674126+0000 mon.a (mon.0) 764 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:52 vm07 bash[28052]: audit 2026-03-09T21:16:52.679130+0000 mon.a (mon.0) 765 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T21:16:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:52 vm07 bash[28052]: audit 2026-03-09T21:16:52.679130+0000 mon.a (mon.0) 765 : audit [INF] from='client.? 192.168.123.107:0/2387907816' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T21:16:53.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-09T21:16:53.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-09T21:16:53.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=arp
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=edac
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=os
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=stat
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=time
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-09T21:16:53.443 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=uname
2026-03-09T21:16:53.444 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-09T21:16:53.444 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-09T21:16:53.444 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.072Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-09T21:16:53.444 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.073Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-09T21:16:53.444 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:16:53 vm10 bash[50706]: ts=2026-03-09T21:16:53.073Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-09T21:16:54.736 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.736 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.736 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.736 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.736 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.737 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.737 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.737 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:54.737 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.091 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.091 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:54 vm07 bash[20771]: cluster 2026-03-09T21:16:53.722195+0000 mgr.y (mgr.24416) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T21:16:55.092 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:54 vm07 bash[20771]: cluster 2026-03-09T21:16:53.722195+0000 mgr.y (mgr.24416) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T21:16:55.092 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:54 vm07 bash[28052]: cluster 2026-03-09T21:16:53.722195+0000 mgr.y (mgr.24416) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T21:16:55.092 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:54 vm07 bash[28052]: cluster 2026-03-09T21:16:53.722195+0000 mgr.y (mgr.24416) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T21:16:55.092 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.092 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: Started Ceph alertmanager.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3.
2026-03-09T21:16:55.093 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:16:54 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:54 vm10 bash[23387]: cluster 2026-03-09T21:16:53.722195+0000 mgr.y (mgr.24416) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T21:16:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:54 vm10 bash[23387]: cluster 2026-03-09T21:16:53.722195+0000 mgr.y (mgr.24416) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-09T21:16:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:55 vm07 bash[21040]: [09/Mar/2026:21:16:55] ENGINE Bus STOPPING
2026-03-09T21:16:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:55 vm07 bash[21040]: [09/Mar/2026:21:16:55] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-09T21:16:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:55 vm07 bash[21040]: [09/Mar/2026:21:16:55] ENGINE Bus STOPPED
2026-03-09T21:16:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:55 vm07 bash[21040]: [09/Mar/2026:21:16:55] ENGINE Bus STARTING
2026-03-09T21:16:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:55 vm07 bash[21040]: [09/Mar/2026:21:16:55] ENGINE Serving on http://:::9283
2026-03-09T21:16:55.365 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:55 vm07 bash[21040]: [09/Mar/2026:21:16:55] ENGINE Bus STARTED
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.161Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.162Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.164Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.107 port=9094
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.166Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.203Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.203Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.205Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-09T21:16:55.365 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[55263]: ts=2026-03-09T21:16:55.205Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-09T21:16:55.692 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:16:55 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:54.991791+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:54.991791+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:54.999960+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:54.999960+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.007737+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.007737+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.015507+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.015507+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: cephadm 2026-03-09T21:16:55.023027+0000 mgr.y (mgr.24416) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: cephadm 2026-03-09T21:16:55.023027+0000 mgr.y (mgr.24416) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.067798+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.067798+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.074190+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.074190+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.079558+0000 mon.c (mon.2) 40 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.079558+0000 mon.c (mon.2) 40 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.080024+0000 mgr.y (mgr.24416) 30 : audit [DBG] from='mon.? -' entity='mon.'
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.080024+0000 mgr.y (mgr.24416) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.086642+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: audit 2026-03-09T21:16:55.086642+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: cephadm 2026-03-09T21:16:55.099415+0000 mgr.y (mgr.24416) 31 : cephadm [INF] Deploying daemon grafana.a on vm10 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: cephadm 2026-03-09T21:16:55.099415+0000 mgr.y (mgr.24416) 31 : cephadm [INF] Deploying daemon grafana.a on vm10 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: cluster 2026-03-09T21:16:55.722572+0000 mgr.y (mgr.24416) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:55 vm07 bash[28052]: cluster 2026-03-09T21:16:55.722572+0000 mgr.y (mgr.24416) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:54.991791+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:54.991791+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:54.999960+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:54.999960+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.007737+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.007737+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.015507+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.015507+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: cephadm 2026-03-09T21:16:55.023027+0000 mgr.y (mgr.24416) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: cephadm 2026-03-09T21:16:55.023027+0000 mgr.y (mgr.24416) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.067798+0000 
mon.a (mon.0) 770 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.067798+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.074190+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.074190+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.079558+0000 mon.c (mon.2) 40 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.079558+0000 mon.c (mon.2) 40 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.080024+0000 mgr.y (mgr.24416) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.080024+0000 mgr.y (mgr.24416) 30 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.086642+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: audit 2026-03-09T21:16:55.086642+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: cephadm 2026-03-09T21:16:55.099415+0000 mgr.y (mgr.24416) 31 : cephadm [INF] Deploying daemon grafana.a on vm10 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: cephadm 2026-03-09T21:16:55.099415+0000 mgr.y (mgr.24416) 31 : cephadm [INF] Deploying daemon grafana.a on vm10 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: cluster 2026-03-09T21:16:55.722572+0000 mgr.y (mgr.24416) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:56.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:55 vm07 bash[20771]: cluster 2026-03-09T21:16:55.722572+0000 mgr.y (mgr.24416) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:56.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:16:56 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:54.991791+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:54.991791+0000 mon.a (mon.0) 766 : 
audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:54.999960+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:54.999960+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.007737+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.007737+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.015507+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.015507+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: cephadm 2026-03-09T21:16:55.023027+0000 mgr.y (mgr.24416) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: cephadm 2026-03-09T21:16:55.023027+0000 mgr.y (mgr.24416) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.067798+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 
bash[23387]: audit 2026-03-09T21:16:55.067798+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.074190+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.074190+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.079558+0000 mon.c (mon.2) 40 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.079558+0000 mon.c (mon.2) 40 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.080024+0000 mgr.y (mgr.24416) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.080024+0000 mgr.y (mgr.24416) 30 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.086642+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: audit 2026-03-09T21:16:55.086642+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: cephadm 2026-03-09T21:16:55.099415+0000 mgr.y (mgr.24416) 31 : cephadm [INF] Deploying daemon grafana.a on vm10 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: cephadm 2026-03-09T21:16:55.099415+0000 mgr.y (mgr.24416) 31 : cephadm [INF] Deploying daemon grafana.a on vm10 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: cluster 2026-03-09T21:16:55.722572+0000 mgr.y (mgr.24416) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:56.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:55 vm10 bash[23387]: cluster 2026-03-09T21:16:55.722572+0000 mgr.y (mgr.24416) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:57.456 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.b/config 2026-03-09T21:16:57.615 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[55263]: ts=2026-03-09T21:16:57.166Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.0002109s 2026-03-09T21:16:57.829 INFO:teuthology.orchestra.run.vm10.stdout:[client.1] 2026-03-09T21:16:57.829 
INFO:teuthology.orchestra.run.vm10.stdout: key = AQDJOK9p/Gn/MBAABXonAI3eOeZHssOIydGD3A== 2026-03-09T21:16:57.954 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T21:16:57.955 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T21:16:57.955 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T21:16:58.017 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-09T21:16:58.017 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T21:16:58.017 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph mgr dump --format=json 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[20771]: audit 2026-03-09T21:16:56.172745+0000 mgr.y (mgr.24416) 33 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[20771]: audit 2026-03-09T21:16:56.172745+0000 mgr.y (mgr.24416) 33 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[20771]: audit 2026-03-09T21:16:56.772982+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[20771]: audit 2026-03-09T21:16:56.772982+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[20771]: audit 2026-03-09T21:16:56.791296+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:57 vm07 bash[20771]: audit 2026-03-09T21:16:56.791296+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:57 vm07 bash[28052]: audit 2026-03-09T21:16:56.172745+0000 mgr.y (mgr.24416) 33 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:57 vm07 bash[28052]: audit 2026-03-09T21:16:56.172745+0000 mgr.y (mgr.24416) 33 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:57 vm07 bash[28052]: audit 2026-03-09T21:16:56.772982+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:57 vm07 bash[28052]: audit 2026-03-09T21:16:56.772982+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:57 vm07 bash[28052]: audit 2026-03-09T21:16:56.791296+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:57 vm07 bash[28052]: audit 2026-03-09T21:16:56.791296+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:57 vm10 
bash[23387]: audit 2026-03-09T21:16:56.172745+0000 mgr.y (mgr.24416) 33 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:57 vm10 bash[23387]: audit 2026-03-09T21:16:56.172745+0000 mgr.y (mgr.24416) 33 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:16:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:57 vm10 bash[23387]: audit 2026-03-09T21:16:56.772982+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:57 vm10 bash[23387]: audit 2026-03-09T21:16:56.772982+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:16:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:57 vm10 bash[23387]: audit 2026-03-09T21:16:56.791296+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:57 vm10 bash[23387]: audit 2026-03-09T21:16:56.791296+0000 mon.c (mon.2) 41 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:16:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:16:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: cluster 2026-03-09T21:16:57.723071+0000 mgr.y (mgr.24416) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:59.115 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: cluster 2026-03-09T21:16:57.723071+0000 mgr.y (mgr.24416) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: audit 2026-03-09T21:16:57.820778+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.110:0/2858638029' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: audit 2026-03-09T21:16:57.820778+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.110:0/2858638029' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: audit 2026-03-09T21:16:57.821907+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: audit 2026-03-09T21:16:57.821907+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: audit 2026-03-09T21:16:57.824826+0000 mon.a (mon.0) 775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T21:16:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:16:58 vm07 bash[20771]: audit 2026-03-09T21:16:57.824826+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: cluster 2026-03-09T21:16:57.723071+0000 mgr.y (mgr.24416) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: cluster 2026-03-09T21:16:57.723071+0000 mgr.y (mgr.24416) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: audit 2026-03-09T21:16:57.820778+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.110:0/2858638029' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: audit 2026-03-09T21:16:57.820778+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 
192.168.123.110:0/2858638029' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: audit 2026-03-09T21:16:57.821907+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: audit 2026-03-09T21:16:57.821907+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: audit 2026-03-09T21:16:57.824826+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T21:16:59.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:16:58 vm07 bash[28052]: audit 2026-03-09T21:16:57.824826+0000 mon.a (mon.0) 775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: cluster 2026-03-09T21:16:57.723071+0000 mgr.y (mgr.24416) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: cluster 2026-03-09T21:16:57.723071+0000 mgr.y (mgr.24416) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: audit 2026-03-09T21:16:57.820778+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.110:0/2858638029' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: audit 2026-03-09T21:16:57.820778+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.110:0/2858638029' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: audit 2026-03-09T21:16:57.821907+0000 mon.a (mon.0) 774 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: audit 2026-03-09T21:16:57.821907+0000 mon.a (mon.0) 774 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: audit 2026-03-09T21:16:57.824826+0000 mon.a (mon.0) 775 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T21:16:59.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:16:58 vm10 bash[23387]: audit 2026-03-09T21:16:57.824826+0000 mon.a (mon.0) 775 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T21:17:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:00 vm07 bash[20771]: cluster 2026-03-09T21:16:59.723714+0000 mgr.y (mgr.24416) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T21:17:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:00 vm07 bash[20771]: cluster 2026-03-09T21:16:59.723714+0000 mgr.y (mgr.24416) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T21:17:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:00 vm07 bash[28052]: cluster 2026-03-09T21:16:59.723714+0000 mgr.y (mgr.24416) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T21:17:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:00 vm07 bash[28052]: cluster 2026-03-09T21:16:59.723714+0000 mgr.y (mgr.24416) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T21:17:01.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:00 vm10 bash[23387]: cluster 2026-03-09T21:16:59.723714+0000 mgr.y (mgr.24416) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T21:17:01.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:00 vm10 bash[23387]: cluster 2026-03-09T21:16:59.723714+0000 mgr.y (mgr.24416) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T21:17:02.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:01 vm07 bash[20771]: cluster 2026-03-09T21:17:01.724055+0000 mgr.y (mgr.24416) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:01 vm07 bash[20771]: cluster 2026-03-09T21:17:01.724055+0000 mgr.y (mgr.24416) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:01 vm07 bash[28052]: cluster 2026-03-09T21:17:01.724055+0000 mgr.y (mgr.24416) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:01 vm07 bash[28052]: cluster 2026-03-09T21:17:01.724055+0000 mgr.y (mgr.24416) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:01 vm10 bash[23387]: cluster 2026-03-09T21:17:01.724055+0000 mgr.y (mgr.24416) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:01 vm10 bash[23387]: cluster 2026-03-09T21:17:01.724055+0000 mgr.y (mgr.24416) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:02.688 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:03.013 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:17:03.236 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":22,"flags":0,"active_gid":24416,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":1058742449}]},"active_addr":"192.168.123.107:6800/1058742449","active_change":"2026-03-09T21:16:41.687271+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24427,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: 
name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current `PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
If an unfinished request is removed, an error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.107:8443/","prometheus":"http://192.168.123.107:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":66,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2775570082}]},{"name":"libcep
hsqlite","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2296204981}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2613143045}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":1038431478}]}]} 2026-03-09T21:17:03.238 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T21:17:03.238 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T21:17:03.238 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd dump --format=json 2026-03-09T21:17:03.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:03 vm07 bash[20771]: audit 2026-03-09T21:17:03.010163+0000 mon.b (mon.1) 37 : audit [DBG] from='client.? 192.168.123.107:0/3160195356' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T21:17:03.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:03 vm07 bash[20771]: audit 2026-03-09T21:17:03.010163+0000 mon.b (mon.1) 37 : audit [DBG] from='client.? 192.168.123.107:0/3160195356' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T21:17:03.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:03 vm07 bash[28052]: audit 2026-03-09T21:17:03.010163+0000 mon.b (mon.1) 37 : audit [DBG] from='client.? 192.168.123.107:0/3160195356' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T21:17:03.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:03 vm07 bash[28052]: audit 2026-03-09T21:17:03.010163+0000 mon.b (mon.1) 37 : audit [DBG] from='client.? 
192.168.123.107:0/3160195356' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T21:17:03.939 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:03 vm10 bash[23387]: audit 2026-03-09T21:17:03.010163+0000 mon.b (mon.1) 37 : audit [DBG] from='client.? 192.168.123.107:0/3160195356' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T21:17:03.939 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:03 vm10 bash[23387]: audit 2026-03-09T21:17:03.010163+0000 mon.b (mon.1) 37 : audit [DBG] from='client.? 192.168.123.107:0/3160195356' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T21:17:04.191 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.441 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.441 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.441 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:04 vm10 bash[23387]: cluster 2026-03-09T21:17:03.724629+0000 mgr.y (mgr.24416) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:04.441 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:04 vm10 bash[23387]: cluster 2026-03-09T21:17:03.724629+0000 mgr.y (mgr.24416) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:04.441 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.441 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.441 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.441 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.442 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.442 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:17:04.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:17:04.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: Started Ceph grafana.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.693 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:17:04.693 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:04 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:17:04.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:04 vm07 bash[20771]: cluster 2026-03-09T21:17:03.724629+0000 mgr.y (mgr.24416) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:04.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:04 vm07 bash[20771]: cluster 2026-03-09T21:17:03.724629+0000 mgr.y (mgr.24416) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:04.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:04 vm07 bash[28052]: cluster 2026-03-09T21:17:03.724629+0000 mgr.y (mgr.24416) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:04.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:04 vm07 bash[28052]: cluster 2026-03-09T21:17:03.724629+0000 mgr.y (mgr.24416) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:05.057 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.805990921Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T21:17:04Z 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.806902296Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.807258193Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings 
t=2026-03-09T21:17:04.807314527Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.807658031Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.807711181Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.807934739Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.80798309Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.808305343Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.808353253Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.808558788Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings 
t=2026-03-09T21:17:04.808605876Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.808829383Z level=info msg=Target target=[all] 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.808888805Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.809112173Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.809158711Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.809347344Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.809401274Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=settings t=2026-03-09T21:17:04.809600747Z level=info msg="App mode production" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=sqlstore t=2026-03-09T21:17:04.810131762Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=sqlstore t=2026-03-09T21:17:04.810356212Z level=warn msg="SQLite database file has broader permissions than it should" 
path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.81101729Z level=info msg="Starting DB migrations" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.812085629Z level=info msg="Executing migration" id="create migration_log table" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.812874075Z level=info msg="Migration successfully executed" id="create migration_log table" duration=786.993µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.815036543Z level=info msg="Executing migration" id="create user table" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.816200413Z level=info msg="Migration successfully executed" id="create user table" duration=1.166224ms 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.818731992Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.820139005Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.409308ms 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.822713914Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:04.823855271Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.143692ms 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.825557498Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.826213867Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=656.579µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.828076363Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.828619289Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=543.007µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.829772398Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.830952718Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.177785ms 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.832333532Z level=info msg="Executing migration" id="create user table v2" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.832901485Z level=info msg="Migration successfully executed" id="create 
user table v2" duration=564.236µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.834653215Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.835254089Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=600.734µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.836418339Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.836953731Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=533.258µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.838414866Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.838786833Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=371.896µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.840450146Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.840884339Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=434.193µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 
21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.842009946Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.842668138Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=658.152µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.844014368Z level=info msg="Executing migration" id="Update user table charset" 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.844221586Z level=info msg="Migration successfully executed" id="Update user table charset" duration=209.982µs 2026-03-09T21:17:05.058 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.845459493Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.8461204Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=663.932µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.847964552Z level=info msg="Executing migration" id="Add missing user data" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.848297005Z level=info msg="Migration successfully executed" id="Add missing user data" duration=332.323µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.84960875Z level=info msg="Executing 
migration" id="Add is_disabled column to user" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.850352743Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=745.426µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.851901111Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.852651586Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=750.284µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.854088697Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.855050627Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=961.749µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.856937258Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.860870632Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=3.932782ms 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.862557118Z level=info msg="Executing migration" id="Add uid column to user" 
2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.86342346Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=865.991µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.86468884Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.865041079Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=352.56µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.86651112Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.867149936Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=638.846µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.869272129Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.869949758Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=677.308µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.871660901Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:04.872301139Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=637.402µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.874155811Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.874797371Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=641.179µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.876759906Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.877436642Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=676.646µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.879104324Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.879758148Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=651.499µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.881444224Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:04.881673414Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=229.842µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.883505503Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.884131845Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=626.413µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.885369101Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.885994993Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=632.925µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.887540867Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.888236359Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=698.076µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.889902899Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.890503774Z level=info 
msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=601.095µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.892012236Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.893718581Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.706123ms 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.895285444Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.895972991Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=687.146µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.89760721Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.898215479Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=608.029µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.899744782Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.900363501Z level=info msg="Migration successfully executed" 
id="create index IDX_temp_user_org_id - v2" duration=617.316µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.9015979Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.902358865Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=762.488µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.904211884Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.904983388Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=771.263µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.90671521Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.907217871Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=502.731µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.908521801Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.909111485Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=589.294µs 2026-03-09T21:17:05.059 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.910674221Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.911210114Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=535.784µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.912968416Z level=info msg="Executing migration" id="create star table" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.913621959Z level=info msg="Migration successfully executed" id="create star table" duration=656.338µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.915144018Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.915822949Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=678.551µs 2026-03-09T21:17:05.059 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.917374223Z level=info msg="Executing migration" id="create org table v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.91801944Z level=info msg="Migration successfully executed" id="create org table v1" duration=645.188µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 
09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.919965695Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.920526885Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=561.27µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.92205239Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.922603863Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=551.263µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.924255494Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.92493123Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=681.695µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.92654474Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.927244389Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=699.678µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:04.929159645Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.929770899Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=611.184µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.931403466Z level=info msg="Executing migration" id="Update org table charset" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.93166266Z level=info msg="Migration successfully executed" id="Update org table charset" duration=259.735µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.933090613Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.93332411Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=234.118µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.934914137Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.935261697Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=347.74µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.937087975Z level=info msg="Executing migration" 
id="create dashboard table" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.937793006Z level=info msg="Migration successfully executed" id="create dashboard table" duration=704.959µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.939510871Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.940267708Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=756.777µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.941898451Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.942500127Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=601.837µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.944381568Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.945016497Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=634.859µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.94699404Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T21:17:05.060 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.947708838Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=715.84µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.949183077Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.949802116Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=618.918µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.95124641Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.954493177Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=3.246888ms 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.956802681Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.957180307Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=377.576µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.958667702Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-09T21:17:05.060 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.959059986Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=393.216µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.9607233Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.961118009Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=393.827µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.962558215Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.962936623Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=380.102µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.964332536Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.964966553Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=633.907µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.966945968Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:04.967032129Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=97.502µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.968262493Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.969253848Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=990.684µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.970563289Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.971305839Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=742.521µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.972735275Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.973739165Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.00386ms 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.974989064Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:04.975460748Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=471.823µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.976495214Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.977229178Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=737.02µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.97882787Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.979309061Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=479.025µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.981457994Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.981944514Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=487.12µs 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.983357269Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-09T21:17:05.060 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.983377196Z level=info 
msg="Migration successfully executed" id="Update dashboard table charset" duration=21.22µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.984541836Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.98456044Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=19.417µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.985996068Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.987122698Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.126639ms 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.98843282Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.989385734Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=953.236µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.990607501Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.991669689Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" 
duration=1.061747ms 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.993314779Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.994396885Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.08973ms 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.995690666Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.995854573Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=164.447µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.996972836Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.997505353Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=525.624µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:04 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.999404428Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:04.999972652Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=568.965µs 2026-03-09T21:17:05.061 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.001076669Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.001090415Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=14.186µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.002132996Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.002636719Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=504.625µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.004318808Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.004767879Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=445.875µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.006055208Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.007645886Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 
duration=1.590236ms 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.008894222Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.009359263Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=458.489µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.010595917Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.010998431Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=402.233µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.012558863Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.012959382Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=400.98µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.014140103Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.014356237Z level=info msg="Migration successfully executed" 
id="copy dashboard_provisioning v1 to v2" duration=216.375µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.015512462Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.015819907Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=307.415µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.017101236Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.018164216Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.060125ms 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.019466063Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.020002377Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=536.113µs 2026-03-09T21:17:05.061 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.021054054Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.021196532Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=140.164µs 2026-03-09T21:17:05.062 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.02279331Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.022959863Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=166.963µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.024348762Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.024957762Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=609.822µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.026261423Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.027304486Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.043514ms 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.028969743Z level=info msg="Executing migration" id="create data_source table" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.029478354Z level=info msg="Migration successfully executed" id="create data_source table" duration=509.814µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.030847558Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.031362222Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=514.975µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.0326537Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.033102088Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=448.219µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.034602036Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.035039635Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=443.081µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.035961912Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.036330832Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=369.13µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.037240044Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.039274964Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=2.034088ms 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.041140105Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.04159721Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=453.408µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.042735161Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.043155137Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=420.055µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.044189132Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.044627092Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=437.56µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.046171373Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.046577283Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=405.719µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.047847661Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.049173203Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.302087ms 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.050437939Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.051598352Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.160141ms 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.053097659Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.053116124Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=19.246µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.054394205Z level=info msg="Executing migration" id="Update initial 
version to 1" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.054570355Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=177.282µs 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.055936363Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-09T21:17:05.062 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.056880059Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=941.593µs 2026-03-09T21:17:05.310 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.060521655Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-09T21:17:05.310 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.060967521Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=448.7µs 2026-03-09T21:17:05.310 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.062407386Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-09T21:17:05.310 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.062728667Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=318.396µs 2026-03-09T21:17:05.310 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.063999917Z level=info msg="Executing migration" id="Add uid column" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.065292827Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.28767ms 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.066512129Z level=info msg="Executing migration" id="Update uid value" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.066769181Z level=info msg="Migration successfully executed" id="Update uid value" duration=257.311µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.068770998Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.069611282Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=836.005µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.071150272Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.071830485Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=680.733µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.073246896Z level=info msg="Executing migration" id="create api_key table" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.073942579Z level=info msg="Migration successfully executed" id="create 
api_key table" duration=695.623µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.076253335Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.077034527Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=781.754µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.078574721Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.079263298Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=688.969µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.080713573Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.081303438Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=590.055µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.083272042Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.083995928Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=724.647µs 2026-03-09T21:17:05.311 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.085298146Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.08591988Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=621.894µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.087645801Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.088333028Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=687.086µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.089545036Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.09217066Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.623301ms 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.093847029Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.09454217Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=695.943µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 
vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.096438399Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.09713335Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=695.062µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.098611768Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.099429118Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=818.042µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.100732558Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.101354503Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=627.104µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.103085693Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.103525346Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=440.313µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.104595138Z 
level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.105096216Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=503.122µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.106348461Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.10650799Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=159.107µs 2026-03-09T21:17:05.311 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.10814821Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.109308762Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.160402ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.111272328Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.112423873Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.152377ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.113630904Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 
2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.11392306Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=289.031µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.115274961Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.116547302Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.272112ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.118478226Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.12016848Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.686457ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.121753858Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.122650787Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=897.42µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.124432192Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-09T21:17:05.312 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.125143704Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=714.398µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.126871699Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.127777024Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=902.629µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.129162868Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.129761528Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=596.386µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.131655064Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.132422229Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=772.907µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.134000153Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 
2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.135045399Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.045057ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.136541239Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.136728621Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=187.592µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.1378075Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.137951519Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=26.79µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.139658053Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.14087405Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.215947ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.142149108Z level=info msg="Executing migration" id="Add 
encrypted dashboard json column" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.14331501Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.165451ms 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.144690285Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.144852889Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=162.704µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.1465312Z level=info msg="Executing migration" id="create quota table v1" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.147132366Z level=info msg="Migration successfully executed" id="create quota table v1" duration=601.837µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.148516887Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.149147548Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=630.12µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.150597442Z level=info msg="Executing migration" id="Update quota table charset" 
2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.150806783Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=210.113µs 2026-03-09T21:17:05.312 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.152039771Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.153867363Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.826149ms 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.156489051Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.15849705Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=2.008912ms 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.161283896Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.16298469Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.699791ms 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.164716853Z level=info msg="Executing migration" id="Update plugin_setting table charset" 
2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.164740337Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=25.187µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.166832562Z level=info msg="Executing migration" id="create session table" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.167779996Z level=info msg="Migration successfully executed" id="create session table" duration=946.974µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.169541894Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.169828321Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=287.337µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.171346082Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.171534154Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=187.781µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.173075299Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:05.173862001Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=785.771µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.175664967Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.176513545Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=849.832µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.178257269Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.178289821Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=34.805µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.180040347Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.180066166Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.621µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.182091747Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.183713934Z level=info msg="Migration successfully 
executed" id="Add playlist column created_at" duration=1.618189ms 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.18565112Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.187357484Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.716073ms 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.189362798Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.189441244Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=79.739µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.191863559Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.191969778Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=110.056µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.193790237Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.194671346Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=880.799µs 2026-03-09T21:17:05.313 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.1964793Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.19651669Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=41.338µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.198532393Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.200268834Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.734778ms 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.202647737Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.203230357Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=580.466µs 2026-03-09T21:17:05.313 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.204812128Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.206377459Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.562244ms 2026-03-09T21:17:05.314 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.20820557Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.209654333Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.446869ms 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.211878677Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.212349598Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=471.994µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.214026827Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.214824651Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=798.054µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.216310272Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.216874458Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=564.427µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 
bash[51199]: logger=migrator t=2026-03-09T21:17:05.218555836Z level=info msg="Executing migration" id="create alert table v1" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.219297645Z level=info msg="Migration successfully executed" id="create alert table v1" duration=738.543µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.220798374Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.221415689Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=617.535µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.222999904Z level=info msg="Executing migration" id="add index alert state" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.223619805Z level=info msg="Migration successfully executed" id="add index alert state" duration=617.988µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.225252331Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.22582868Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=575.087µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.22725563Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 
2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.227769222Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=512.961µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.229163602Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.229708152Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=546.943µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.231403396Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.231963874Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=561.35µs 2026-03-09T21:17:05.314 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.233080806Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.236287478Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=3.203597ms 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.238071357Z level=info msg="Executing migration" 
id="Create alert_rule_tag table v2" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.238790694Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=719.187µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.240789936Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.24137934Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=588.452µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.24271539Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.243070245Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=355.696µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.244388232Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.244891274Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=503.392µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.246714166Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.247372579Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=658.132µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.248577745Z level=info msg="Executing migration" id="Add column is_default" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.249876806Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.29846ms 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.2514652Z level=info msg="Executing migration" id="Add column frequency" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.252738083Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.272252ms 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.25456334Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.255876648Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.313148ms 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.256975865Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-09T21:17:05.315 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.25831959Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.343264ms 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.25965484Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.260303294Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=651.751µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.262080271Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.262094337Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=14.106µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.263699221Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.263712736Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=13.916µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.264988335Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 
21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.26564822Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=656.71µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.267442979Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.268103847Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=660.507µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.269410402Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.269976702Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=565.82µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.271256728Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.271753508Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=497.041µs 2026-03-09T21:17:05.315 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.27322378Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T21:17:05.315 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.273770313Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=546.863µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.275090254Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.276393744Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.302638ms 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.277570276Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.278932016Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.360738ms 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.280714182Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.280965271Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=250.538µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.282124352Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-09T21:17:05.316 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.282651378Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=527.016µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.284199957Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.28485894Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=659.535µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.286774496Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.288805288Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.028458ms 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.29072958Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.290979678Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=250.429µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.292279021Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T21:17:05.316 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.292947722Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=668.752µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.295749086Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.296716737Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=967.681µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.298355365Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.298549239Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=193.632µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.300008911Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.300839816Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=830.715µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.302731597Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 
21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.303479718Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=748.652µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.304947775Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.305689724Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=741.828µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.307309296Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.308005549Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=693.839µs 2026-03-09T21:17:05.316 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.309788047Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.311209767Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.421049ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.312991493Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.314019487Z level=info msg="Migration successfully executed" 
id="add index annotation 4 v3" duration=1.028086ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.315723427Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.315968526Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=242.514µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.317661625Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.319704169Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=2.04056ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.321303993Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.321925506Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=622.155µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.323304067Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.324671367Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.36742ms 2026-03-09T21:17:05.561 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.326292993Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.326800893Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=507.32µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.327954934Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.328534589Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=577.18µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.329751196Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.330281259Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=530.002µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.332058435Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.335530454Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - 
v2" duration=3.471308ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.336947618Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.337449967Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=502.609µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.338670763Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.339416489Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=745.285µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.340971671Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.341220035Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=248.244µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.342288344Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.34271305Z level=info 
msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=421.931µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.343843195Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.344029504Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=186.439µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.345430366Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.346849213Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.418826ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.348121474Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.349403093Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.281048ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.350644657Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.35119641Z level=info msg="Migration 
successfully executed" id="Add index for created in annotation table" duration=551.373µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.352650843Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.353103259Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=452.776µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.354512788Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.354742829Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=230.371µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.35579536Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.357355881Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.56008ms 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.358634944Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.359179784Z level=info msg="Migration successfully executed" id="Add 
index for epoch_end" duration=545.1µs 2026-03-09T21:17:05.561 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.360686596Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.360899163Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=207.608µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.362055077Z level=info msg="Executing migration" id="Move region to single row" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.362435509Z level=info msg="Migration successfully executed" id="Move region to single row" duration=380.803µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.36385112Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.364755802Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=905.535µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.366623218Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.367373613Z level=info msg="Migration successfully executed" id="Remove index 
org_id_dashboard_id_panel_id_epoch from annotation table" duration=751.667µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.368589268Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.369218787Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=629.508µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.370506578Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.371155521Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=648.462µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.37273602Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.373365289Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=629.88µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.374595372Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 
21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.375281877Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=686.245µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.376768829Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.376948346Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=177.513µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.378334039Z level=info msg="Executing migration" id="create test_data table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.378989297Z level=info msg="Migration successfully executed" id="create test_data table" duration=655.156µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.38036391Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.380893851Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=529.741µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.38232971Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.38287994Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=550.461µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.384544245Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.38516693Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=622.635µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.3867055Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.38699345Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=286.327µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.388258628Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.38860152Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=342.691µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.39002233Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-09T21:17:05.562 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.390062485Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=40.886µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.391451274Z level=info msg="Executing migration" id="create team table" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.391996495Z level=info msg="Migration successfully executed" id="create team table" duration=544.789µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.393285158Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.393955452Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=671.266µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.395724213Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.396313736Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=589.853µs 2026-03-09T21:17:05.562 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.397676968Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.399436633Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.758461ms 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.400846602Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.401075821Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=229.559µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.402405821Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.402992979Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=587.218µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.404706548Z level=info msg="Executing migration" id="create team member table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.405202426Z level=info msg="Migration successfully executed" id="create team member table" duration=495.958µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.406526695Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.40706307Z level=info msg="Migration successfully executed" id="add index 
team_member.org_id" duration=532.336µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.408422784Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.408910127Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=487.053µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.410534698Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.411172141Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=634.817µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.412591607Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.414655322Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=2.058744ms 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.416250668Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.417942495Z level=info msg="Migration successfully executed" id="Add column external to team_member table" 
duration=1.690856ms 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.419823646Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.422146614Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=2.324582ms 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.423808395Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.424545605Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=737.24µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.426163754Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.426951098Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=788.125µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.428891299Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.429715312Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=824.553µs 
2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.431437345Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.432121907Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=685.013µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.433651982Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.434289756Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=638.596µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.436138376Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.43680262Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=664.795µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.439238589Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.440309353Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.070884ms 2026-03-09T21:17:05.563 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.442068246Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.442829942Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=762.237µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.444721663Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.445131119Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=409.316µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.446487419Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.446711448Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=223.859µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.448140183Z level=info msg="Executing migration" id="create tag table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.448712163Z level=info msg="Migration successfully executed" id="create tag table" duration=572.05µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 
09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.449993813Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.450659259Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=665.175µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.45203279Z level=info msg="Executing migration" id="create login attempt table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.452553233Z level=info msg="Migration successfully executed" id="create login attempt table" duration=519.943µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.454158469Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.454744927Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=586.999µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.456049608Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.457615028Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.565139ms 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.45898884Z 
level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.463768307Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.7756ms 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.466689205Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.467511694Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=823.401µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.468781111Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.469472576Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=695.382µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.47096513Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.471251285Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=289.541µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.472596504Z level=info msg="Executing 
migration" id="drop login_attempt_tmp_qwerty" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.472950355Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=353.452µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.474051095Z level=info msg="Executing migration" id="create user auth table" 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.474638606Z level=info msg="Migration successfully executed" id="create user auth table" duration=587.109µs 2026-03-09T21:17:05.563 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.475968966Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.476796927Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=827.589µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.478586035Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.478641359Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=56.036µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.480184448Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 
2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.482868621Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=2.682731ms 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.484422761Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.486322206Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.901138ms 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.488157513Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.489843008Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.685184ms 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.49139911Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.493018581Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.617729ms 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.494442166Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-09T21:17:05.564 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.494907919Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=466.083µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.496486614Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.498111335Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.625792ms 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.499347449Z level=info msg="Executing migration" id="create server_lock table" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.499742368Z level=info msg="Migration successfully executed" id="create server_lock table" duration=394.589µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.501348936Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.502070517Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=721.05µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.504058939Z level=info msg="Executing migration" id="create user auth token table" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:05.504726057Z level=info msg="Migration successfully executed" id="create user auth token table" duration=667.49µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.50615395Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.506808786Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=653.192µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.50828536Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.508925478Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=640.791µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.510775371Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.511732933Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=957.743µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.513219977Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.515170099Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.949739ms 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.51666745Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.517308891Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=639.207µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.519099864Z level=info msg="Executing migration" id="create cache_data table" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.519709155Z level=info msg="Migration successfully executed" id="create cache_data table" duration=609.15µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.521135806Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.521734186Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=599.072µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.52326411Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.523865224Z level=info msg="Migration successfully 
executed" id="create short_url table v1" duration=603.94µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.525700961Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.526344907Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=643.996µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.527811672Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.528017258Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=205.637µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.529110484Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.529317281Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=206.556µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.530869747Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.531462878Z level=info msg="Migration successfully executed" id="recreate 
alert_definition table" duration=593.08µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.532782007Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.533337817Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=555.691µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.534674138Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.535306632Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=629.648µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.536711091Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.536921204Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=209.752µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.538541808Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:05.539122436Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=580.917µs 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.540197938Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-09T21:17:05.564 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.540723813Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=526.015µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.542413977Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.543035821Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=621.624µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.544180464Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.544756231Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=575.427µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.545953572Z level=info msg="Executing migration" id="Add column 
paused in alert_definition" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.547970368Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=2.016075ms 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.54996422Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.550641078Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=676.747µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.552089679Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.552331061Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=242.173µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.553494929Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.554155406Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=660.447µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.556641049Z level=info msg="Executing migration" id="add index in alert_definition_version table on 
alert_definition_id and version columns" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.557468659Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=828.2µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.558922961Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.55949979Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=576.631µs 2026-03-09T21:17:05.565 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.560694497Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T21:17:05.587 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[55263]: ts=2026-03-09T21:17:05.168Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.00250488s 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.586691+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.586691+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.593266+0000 mon.a (mon.0) 777 : 
audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.593266+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.599395+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.599395+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.613796+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.613796+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.628920+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:05.813 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:05 vm10 bash[23387]: audit 2026-03-09T21:17:04.628920+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.562466624Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=1.772548ms 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 
vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.564304124Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.564970091Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=665.415µs 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.567250489Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.567915043Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=664.395µs 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.569107004Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.569750579Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=643.525µs 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.571499232Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T21:17:05.813 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.57212293Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" 
duration=623.617µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.573542708Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.575480596Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=1.937719ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.576745874Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.577327293Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=583.341µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.578994283Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.579581492Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=587.388µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.580689206Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.590649981Z 
level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=9.959252ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.592690352Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.601172209Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=8.480905ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.602803913Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.603378659Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=574.845µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.60480561Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.605300967Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=492.853µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.606908226Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T21:17:05.814 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.608774168Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.865602ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.609984986Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.611828156Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.842779ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.613125123Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.613656849Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=531.606µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.614876632Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.615384693Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=507.97µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.616941758Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 
2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.617432877Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=491.451µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.61878547Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.619383178Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=597.418µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.620999063Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.621059666Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=60.533µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.622335495Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.624173906Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.83817ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.62524981Z level=info 
msg="Executing migration" id="add column annotations to alert_rule" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.627191585Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.941676ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.628739121Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.631517122Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=2.764295ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.633456102Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.63422444Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=769.65µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.635547537Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.636232269Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=684.21µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:05.637503939Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.640582933Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=3.07149ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.642901022Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.645136937Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=2.236556ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.646686869Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.647423076Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=737.03µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.648885975Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.651446398Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=2.556345ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 
21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.653391991Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.656805019Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=3.403861ms 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.658602555Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.658665722Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=65.692µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.659991254Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.660781623Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=790.78µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.662634282Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.663537973Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=900.946µs 
2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.66505876Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.665789898Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=729.906µs 2026-03-09T21:17:05.814 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.667234182Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.667271151Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=37.751µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.668818278Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.671888454Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=3.065868ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.673581824Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.675762727Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=2.178899ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.6771849Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.679598628Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=2.414119ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.681440195Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.68433263Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=2.880382ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.685712051Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.688249962Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.534685ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.690655405Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 
vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.690710318Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=57.047µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.702946183Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.703632938Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=689.101µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.704991832Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.707558667Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=2.563107ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.708955051Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.709027627Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=72.576µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.710029202Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 
2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.71209563Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=2.065878ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.713672592Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.714163401Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=490.789µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.715275041Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.717532677Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.257336ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.718721583Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.719112775Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=390.931µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.720624746Z level=info msg="Executing migration" id="add index in 
ngalert_configuration on org_id column" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.721084406Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=457.376µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.722301785Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.724201591Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.899566ms 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.725429751Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.7258143Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=384.609µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.727260237Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.727708437Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=448.209µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.728855172Z 
level=info msg="Executing migration" id="create alert_image table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.729218664Z level=info msg="Migration successfully executed" id="create alert_image table" duration=363.481µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.730407418Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.730858463Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=451.496µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.732448068Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.732477203Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=31.088µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.733494668Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.73395523Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=460.291µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.735137604Z level=info msg="Executing migration" 
id="drop non-unique orgID index on alert_configuration" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.735611731Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=473.847µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.736982967Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.737174485Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.738310041Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.738542807Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=232.476µs 2026-03-09T21:17:05.815 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.73954838Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.739984737Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=436.548µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 
09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.741859356Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.743882583Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.022778ms 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.744992721Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.745498157Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=505.096µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.746736015Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.747237173Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=501.358µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.74875318Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.749122772Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=369.582µs 2026-03-09T21:17:05.816 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.750308371Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.750763574Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=455.142µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.751933253Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.752359591Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=425.908µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.753969665Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.753983722Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=14.476µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.755073221Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.755100322Z level=info msg="Migration successfully executed" id="alter library_element model to 
mediumtext" duration=27.592µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.75596457Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.75615187Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=187.141µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.757078565Z level=info msg="Executing migration" id="create data_keys table" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.757527064Z level=info msg="Migration successfully executed" id="create data_keys table" duration=448.378µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.758992027Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.759385783Z level=info msg="Migration successfully executed" id="create secrets table" duration=393.376µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.760582805Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.770688011Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=10.104655ms 2026-03-09T21:17:05.816 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.771934986Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.774095279Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.160094ms 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.775541847Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.775651031Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=109.024µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.776546227Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.786907903Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.362609ms 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.787997493Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.798012941Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=10.014697ms 2026-03-09T21:17:05.816 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.799255528Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.799679862Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=423.933µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.801114898Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.801602702Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=487.663µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.80272382Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.802988285Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=263.845µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.804152595Z level=info msg="Executing migration" id="create permission table" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.804770722Z level=info msg="Migration successfully executed" id="create permission table" duration=618.177µs 2026-03-09T21:17:05.816 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.80613182Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.806771777Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=639.977µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.80858908Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.809210523Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=621.523µs 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.810580147Z level=info msg="Executing migration" id="create role table" 2026-03-09T21:17:05.816 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.811318178Z level=info msg="Migration successfully executed" id="create role table" duration=738.001µs 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.586691+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.586691+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.593266+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24416 ' 
entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.593266+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.599395+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.599395+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.613796+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.613796+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.628920+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:05 vm07 bash[20771]: audit 2026-03-09T21:17:04.628920+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.586691+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.586691+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.593266+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.593266+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.599395+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.599395+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.613796+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.613796+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.628920+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:05.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:05 vm07 bash[28052]: audit 2026-03-09T21:17:04.628920+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:06.118 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.81323729Z level=info msg="Executing migration" id="add column display_name" 
2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.816568877Z level=info msg="Migration successfully executed" id="add column display_name" duration=3.331196ms 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.817796536Z level=info msg="Executing migration" id="add column group_name" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.820207248Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.410272ms 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.821782617Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.822270159Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=487.802µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.823595761Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.824106848Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=511.287µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.825312124Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:05.825831126Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=519.143µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.827529655Z level=info msg="Executing migration" id="create team role table" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.827939271Z level=info msg="Migration successfully executed" id="create team role table" duration=409.817µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.829140972Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.829669621Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=530.352µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.830898261Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.831483397Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=584.905µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.833024211Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.833530248Z level=info msg="Migration successfully executed" 
id="add index team_role.team_id" duration=506.066µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.836141746Z level=info msg="Executing migration" id="create user role table" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.836632315Z level=info msg="Migration successfully executed" id="create user role table" duration=490.368µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.837985408Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.838518445Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=531.225µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.840065691Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.840672338Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=606.235µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.841990173Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.842488337Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=498.113µs 2026-03-09T21:17:06.119 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.845087091Z level=info msg="Executing migration" id="create builtin role table" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.845553014Z level=info msg="Migration successfully executed" id="create builtin role table" duration=465.543µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.847281049Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.847833483Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=552.154µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.848993013Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.849579832Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=586.117µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.850975144Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.853369325Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.394221ms 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: 
logger=migrator t=2026-03-09T21:17:05.854413801Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.854909368Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=495.587µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.855975713Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.856472864Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=497µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.857653124Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.858130026Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=476.703µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.85913082Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.859606301Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=475.14µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.860395748Z level=info 
msg="Executing migration" id="create seed assignment table" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.860761823Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=366.005µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.861790909Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.862322866Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=532.397µs 2026-03-09T21:17:06.119 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.863622829Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.865968971Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.345911ms 2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.867203551Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.869551747Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.346633ms 2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.870515742Z level=info msg="Executing migration" id="permission attribute migration" 
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.872771214Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.255393ms
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.873769983Z level=info msg="Executing migration" id="permission identifier migration"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.876059139Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.29124ms
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.877245749Z level=info msg="Executing migration" id="add permission identifier index"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.87772667Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=478.085µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.879028187Z level=info msg="Executing migration" id="add permission action scope role_id index"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.879547538Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=519.392µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.880586244Z level=info msg="Executing migration" id="remove permission role_id action scope index"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.881053518Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=467.014µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.882110156Z level=info msg="Executing migration" id="create query_history table v1"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.882521085Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=410.849µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.883529884Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.883978173Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=447.989µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.884964339Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.884993023Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=29.556µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.88580353Z level=info msg="Executing migration" id="rbac disabled migrator"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.885827134Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=24.275µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.886772013Z level=info msg="Executing migration" id="teams permissions migration"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.88698956Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=217.286µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.887749743Z level=info msg="Executing migration" id="dashboard permissions"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.888013507Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=264.276µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.888774161Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.889047482Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=275.865µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.890042275Z level=info msg="Executing migration" id="drop managed folder create actions"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.890149996Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=107.662µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.891445101Z level=info msg="Executing migration" id="alerting notification permissions"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.891689468Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=244.798µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.892463908Z level=info msg="Executing migration" id="create query_history_star table v1"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.892810506Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=346.769µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.895454246Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.89610309Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=649.086µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.897390219Z level=info msg="Executing migration" id="add column org_id in query_history_star"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.900222441Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.832272ms
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.901278638Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.901308765Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=28.553µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.902388786Z level=info msg="Executing migration" id="create correlation table v1"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.902890576Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=501.55µs
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.904205777Z level=info msg="Executing migration" id="add index correlations.uid"
2026-03-09T21:17:06.120 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.904667753Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=461.695µs
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.905716856Z level=info msg="Executing migration" id="add index correlations.source_uid"
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.906208998Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=490.589µs
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.958661305Z level=info msg="Executing migration" id="add correlation config column"
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.96489329Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.230933ms
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.968439108Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.96909258Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=658.542µs
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.982015752Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.982840576Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=829.653µs
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.984026857Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:05 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:05.991609281Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=7.574459ms
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.059077046Z level=info msg="Executing migration" id="create correlation v2"
2026-03-09T21:17:06.121 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.059819335Z level=info msg="Migration successfully executed" id="create correlation v2" duration=743.23µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.118273157Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.119354472Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.085391ms
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.136842869Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.138107256Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.269085ms
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.139649613Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.140430625Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=781.614µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.141790761Z level=info msg="Executing migration" id="copy correlation v1 to v2"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.142160493Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=370.042µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.14348364Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.144042887Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=559.377µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.145007983Z level=info msg="Executing migration" id="add provisioning column"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.14750608Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.495051ms
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.148911771Z level=info msg="Executing migration" id="create entity_events table"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.149507536Z level=info msg="Migration successfully executed" id="create entity_events table" duration=595.904µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.150827197Z level=info msg="Executing migration" id="create dashboard public config v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.151465262Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=638.184µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.152769392Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.153094822Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.154100344Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.15444568Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.155639035Z level=info msg="Executing migration" id="Drop old dashboard public config table"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.156254075Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=610.452µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.15723368Z level=info msg="Executing migration" id="recreate dashboard public config v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.15799828Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=764.17µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.159324613Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.160017139Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=692.587µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.161257852Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.161959054Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=700.882µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.163264669Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.164019923Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=755.334µs
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.165122356Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
2026-03-09T21:17:06.373 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.165853615Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=731.419µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.167246653Z level=info msg="Executing migration" id="Drop public config table"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.167881792Z level=info msg="Migration successfully executed" id="Drop public config table" duration=635.46µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.168983194Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.169652276Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=668.481µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.170916943Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.171590103Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=673.501µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.172622025Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.17326024Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=636.18µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.174408439Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.175002561Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=594.152µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.17616633Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.184973897Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.801416ms
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.186427909Z level=info msg="Executing migration" id="add annotations_enabled column"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.189221238Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.791435ms
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.19076099Z level=info msg="Executing migration" id="add time_selection_enabled column"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.193792104Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=3.030222ms
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.194992081Z level=info msg="Executing migration" id="delete orphaned public dashboards"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.195252878Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=260.988µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.196254664Z level=info msg="Executing migration" id="add share column"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.198799798Z level=info msg="Migration successfully executed" id="add share column" duration=2.544634ms
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.199983614Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.200203325Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=219.741µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.201119951Z level=info msg="Executing migration" id="create file table"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.201633723Z level=info msg="Migration successfully executed" id="create file table" duration=515.185µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.203533719Z level=info msg="Executing migration" id="file table idx: path natural pk"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.204196961Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=663.602µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.205405113Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.206025474Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=621.093µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.20725175Z level=info msg="Executing migration" id="create file_meta table"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.208594613Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.341691ms
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.209943839Z level=info msg="Executing migration" id="file table idx: path key"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.210890762Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=946.572µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.212106557Z level=info msg="Executing migration" id="set path collation in file table"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.212170327Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=64.091µs
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.214028665Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
2026-03-09T21:17:06.374 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.214121149Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=91.932µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.215458723Z level=info msg="Executing migration" id="managed permissions migration"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.21599726Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=536.644µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.21705959Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.217633092Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=571.449µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.220005674Z level=info msg="Executing migration" id="RBAC action name migrator"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.221514598Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.511269ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.222851421Z level=info msg="Executing migration" id="Add UID column to playlist"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.230395433Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=7.541657ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.232109491Z level=info msg="Executing migration" id="Update uid column values in playlist"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.232388704Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=280.805µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.233386803Z level=info msg="Executing migration" id="Add index for uid in playlist"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.234141727Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=757.688µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.235288393Z level=info msg="Executing migration" id="update group index for alert rules"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.235600706Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=312.835µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.236548361Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.236765137Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=216.966µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.2377018Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.238072143Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=370.243µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.239790309Z level=info msg="Executing migration" id="add action column to seed_assignment"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.242930217Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=3.136331ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.244519212Z level=info msg="Executing migration" id="add scope column to seed_assignment"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.247853843Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.329883ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.249185427Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.249925302Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=741.319µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.251070315Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.27753245Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=26.456304ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.279305831Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.280474708Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.1702ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.281667461Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.282405723Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=738.142µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.283662175Z level=info msg="Executing migration" id="add primary key to seed_assigment"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.29204184Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.378284ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.293347796Z level=info msg="Executing migration" id="add origin column to seed_assignment"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.296017012Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.668896ms
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.297112734Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.297353063Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=240.16µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.298637598Z level=info msg="Executing migration" id="prevent seeding OnCall access"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.299048457Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=412.532µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.300122498Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.300366384Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=244.136µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.301207178Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.301408886Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=203.032µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.302428143Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.302663423Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=235.28µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.303713849Z level=info msg="Executing migration" id="create folder table"
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.304470617Z level=info msg="Migration successfully executed" id="create folder table" duration=756.275µs
2026-03-09T21:17:06.375 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.305503961Z level=info msg="Executing migration" id="Add index for parent_uid"
2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.30625642Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=752.339µs
2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.307501711Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
2026-03-09T21:17:06.376
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.308165794Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=664.073µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.309351803Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.3093657Z level=info msg="Migration successfully executed" id="Update folder title length" duration=14.207µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.310477351Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.311284312Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=806.59µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.312508292Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.313129095Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=620.502µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.314113628Z level=info msg="Executing migration" id="Add unique index for title, 
parent_uid, and org_id" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.314774875Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=661.037µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.316002344Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.316355274Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=352.569µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.317226595Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.317463378Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=236.944µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.319108006Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.319744729Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=639.277µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.320969191Z level=info msg="Executing migration" id="Add unique index 
UQE_folder_org_id_uid" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.321693566Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=725.006µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.323147859Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.323773591Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=625.871µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.325419993Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.326158295Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=738.142µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.327392345Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.328257786Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=865.531µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.329765236Z level=info 
msg="Executing migration" id="create anon_device table" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.330416475Z level=info msg="Migration successfully executed" id="create anon_device table" duration=652.763µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.331702694Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.332623978Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=920.934µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.334052322Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.334769844Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=717.833µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.336424192Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.337001412Z level=info msg="Migration successfully executed" id="create signing_key table" duration=577.01µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.338305063Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T21:17:06.376 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.338884577Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=577.541µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.340151209Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.340789032Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=637.854µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.342417821Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.342792623Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=375.303µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.345230045Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.34940989Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=4.17705ms 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.351157801Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 
2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.351689508Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=532.828µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.353009379Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.353716272Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=706.973µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.354975849Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.355721295Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=745.587µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.356746854Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.357356055Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=597.138µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.358868255Z 
level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.35954327Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=675.073µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.360548631Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.361115974Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=567.562µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.362674441Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.363270335Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=595.775µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.364557315Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.365090274Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=533.248µs 2026-03-09T21:17:06.376 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator 
t=2026-03-09T21:17:06.3661091Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T21:17:06.377 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.366343909Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=234.889µs 2026-03-09T21:17:06.377 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.367714114Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T21:17:06.377 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.367780859Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=67.125µs 2026-03-09T21:17:06.377 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.369217017Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T21:17:06.377 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.372889162Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=3.66984ms 2026-03-09T21:17:06.665 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:06 vm10 bash[23387]: cluster 2026-03-09T21:17:05.724970+0000 mgr.y (mgr.24416) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:06.665 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:06 vm10 bash[23387]: cluster 2026-03-09T21:17:05.724970+0000 mgr.y (mgr.24416) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:06.665 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.375223231Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.378579504Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=3.35425ms 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.380143151Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.38066601Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=524.092µs 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=migrator t=2026-03-09T21:17:06.382497819Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.570437417s 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=sqlstore t=2026-03-09T21:17:06.383613527Z level=info msg="Created default organization" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=secrets t=2026-03-09T21:17:06.384810338Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=plugin.store t=2026-03-09T21:17:06.393616221Z level=info msg="Loading plugins..." 
2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=local.finder t=2026-03-09T21:17:06.439164174Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=plugin.store t=2026-03-09T21:17:06.439188489Z level=info msg="Plugins loaded" count=55 duration=45.572198ms 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=query_data t=2026-03-09T21:17:06.443150807Z level=info msg="Query Service initialization" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=live.push_http t=2026-03-09T21:17:06.445654514Z level=info msg="Live Push Gateway initialization" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.migration t=2026-03-09T21:17:06.45431779Z level=info msg=Starting 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.migration t=2026-03-09T21:17:06.454720605Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.migration orgID=1 t=2026-03-09T21:17:06.455136974Z level=info msg="Migrating alerts for organisation" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.migration orgID=1 t=2026-03-09T21:17:06.455538676Z level=info msg="Alerts found to migrate" alerts=0 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.migration t=2026-03-09T21:17:06.456414566Z level=info msg="Completed alerting migration" 
2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.state.manager t=2026-03-09T21:17:06.465806456Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=infra.usagestats.collector t=2026-03-09T21:17:06.467035648Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=provisioning.datasources t=2026-03-09T21:17:06.468273975Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=provisioning.datasources t=2026-03-09T21:17:06.474036021Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=provisioning.alerting t=2026-03-09T21:17:06.480693805Z level=info msg="starting to provision alerting" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=provisioning.alerting t=2026-03-09T21:17:06.480708363Z level=info msg="finished to provision alerting" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=grafanaStorageLogger t=2026-03-09T21:17:06.481111797Z level=info msg="Storage starting" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=http.server t=2026-03-09T21:17:06.482845863Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 
configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=http.server t=2026-03-09T21:17:06.483221687Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.state.manager t=2026-03-09T21:17:06.483356159Z level=info msg="Warming state cache for startup" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.state.manager t=2026-03-09T21:17:06.484881223Z level=info msg="State cache has been initialized" states=0 duration=1.526047ms 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=provisioning.dashboard t=2026-03-09T21:17:06.485848714Z level=info msg="starting to provision dashboards" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.multiorg.alertmanager t=2026-03-09T21:17:06.499183456Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ngalert.scheduler t=2026-03-09T21:17:06.499556174Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=ticker t=2026-03-09T21:17:06.499982602Z level=info msg=starting first_tick=2026-03-09T21:17:10Z 2026-03-09T21:17:06.665 
INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=sqlstore.transactions t=2026-03-09T21:17:06.49783896Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T21:17:06.665 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=plugins.update.checker t=2026-03-09T21:17:06.56121758Z level=info msg="Update check succeeded" duration=63.251883ms 2026-03-09T21:17:06.666 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=grafana-apiserver t=2026-03-09T21:17:06.599116053Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T21:17:06.666 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=grafana-apiserver t=2026-03-09T21:17:06.599662657Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T21:17:06.666 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:06 vm10 bash[51199]: logger=provisioning.dashboard t=2026-03-09T21:17:06.664860744Z level=info msg="finished to provision dashboards" 2026-03-09T21:17:06.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:06 vm07 bash[20771]: cluster 2026-03-09T21:17:05.724970+0000 mgr.y (mgr.24416) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:06.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:06 vm07 bash[20771]: cluster 2026-03-09T21:17:05.724970+0000 mgr.y (mgr.24416) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:06.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:06 vm07 bash[28052]: cluster 2026-03-09T21:17:05.724970+0000 mgr.y (mgr.24416) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 
160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:06.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:06 vm07 bash[28052]: cluster 2026-03-09T21:17:05.724970+0000 mgr.y (mgr.24416) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:07.895 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:08.251 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:17:08.252 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":66,"fsid":"22c897f4-1bfc-11f1-adaa-13127443f8b3","created":"2026-03-09T21:09:52.807363+0000","modified":"2026-03-09T21:16:41.687140+0000","last_up_change":"2026-03-09T21:15:47.106059+0000","last_in_change":"2026-03-09T21:15:28.039101+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T21:12:50.641417+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0"
,"last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-09T21:16:08.262626+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0
,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-09T21:16:10.323890+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"ta
rget_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-09T21:16:11.380682+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"64","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":64,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict
_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T21:16:12.256033+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt
_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T21:16:14.377166+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objec
ts":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"5ef293c2-89b5-4f27-a447-e0750ac5c165","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6801","nonce":2141296969}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2141296969}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":2141296969}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6803","nonce":2141296969}]},"public_addr":"192.168.123.107:6801/2141296969","cluster_addr":"192.168.123.107:6802/2141296969","heartbeat_back_addr":"192.168.123.107:6804/2141296969","heartbeat_front_addr":"192.168.123.107:6803/2141296969","state":["exists","up"]},{"osd":1,"uuid":"98ca1795-9ed4-4ffb-8a3f-f26e615f554f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6805","nonce":4103893323}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":4103893323}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":4103893323}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6807","nonce":4103893323}]},"public_addr":"192.168.123.107:6805/4103893323","cluster_addr":"192.168.123.107:6806/4103893323","heartbeat_back_addr":"192.168.123.107:6808/4103893323","heartbeat_front_addr":"192.168.123.107:6807/4103893323","state":["exists","up"]},{"osd":2,"uuid":"4a040
af0-0bb5-4407-ba5f-64091d0e0685","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6809","nonce":2553486713}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":2553486713}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":2553486713}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6811","nonce":2553486713}]},"public_addr":"192.168.123.107:6809/2553486713","cluster_addr":"192.168.123.107:6810/2553486713","heartbeat_back_addr":"192.168.123.107:6812/2553486713","heartbeat_front_addr":"192.168.123.107:6811/2553486713","state":["exists","up"]},{"osd":3,"uuid":"82b53895-a55e-4a96-84b2-f1efa2657688","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6813","nonce":1113345127}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1113345127}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1113345127}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6815","nonce":1113345127}]},"public_addr":"192.168.123.107:6813/1113345127","cluster_addr":"192.168.123.107:6814/1113345127","heartbeat_back_addr":"192.168.123.107:6816/1113345127","heartbeat_front_addr":"192.168.123.107:6815/1113345127","state":["exists","up"]},{"osd":4,"uuid":"1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":4164782911}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6801","nonce":4164782911}]},"heartbeat_back_addrs"
:{"addrvec":[{"type":"v2","addr":"192.168.123.110:6803","nonce":4164782911}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":4164782911}]},"public_addr":"192.168.123.110:6800/4164782911","cluster_addr":"192.168.123.110:6801/4164782911","heartbeat_back_addr":"192.168.123.110:6803/4164782911","heartbeat_front_addr":"192.168.123.110:6802/4164782911","state":["exists","up"]},{"osd":5,"uuid":"94d2c197-ad39-4db0-9389-4183a78f1d0a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":1216077544}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6805","nonce":1216077544}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6807","nonce":1216077544}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":1216077544}]},"public_addr":"192.168.123.110:6804/1216077544","cluster_addr":"192.168.123.110:6805/1216077544","heartbeat_back_addr":"192.168.123.110:6807/1216077544","heartbeat_front_addr":"192.168.123.110:6806/1216077544","state":["exists","up"]},{"osd":6,"uuid":"b9ca0fe4-bec8-42a3-9f19-f8c556e71c46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":646422706}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6809","nonce":646422706}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6811","nonce":646422706}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":646422706}]},"public_addr":"192.168.123.110:6808/646422706","cluster_addr":"192.168.123.110:6809/646422706","heartbeat_back_addr":"192.168.123.110:6811/646422706","heartbeat_front_addr":"192.168.123.110:6810/646422
706","state":["exists","up"]},{"osd":7,"uuid":"1f0752e8-2e42-4ee3-ac34-768b5409242e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":2049527874}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6813","nonce":2049527874}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6815","nonce":2049527874}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":2049527874}]},"public_addr":"192.168.123.110:6812/2049527874","cluster_addr":"192.168.123.110:6813/2049527874","heartbeat_back_addr":"192.168.123.110:6815/2049527874","heartbeat_front_addr":"192.168.123.110:6814/2049527874","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:11:38.098561+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:12:13.068960+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:12:47.193524+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:13:22.119872+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:13:56.136897+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:14:
32.010601+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:15:08.102734+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:15:43.982513+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/2719081918":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/1185689761":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/2131061409":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/1110217514":"2026-03-10T21:10:14.562158+0000","192.168.123.107:6800/1914116107":"2026-03-10T21:10:14.562158+0000","192.168.123.107:6800/3796220318":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/985741243":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/4284336732":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/2870185589":"2026-03-10T21:16:41.687111+0000","192.168.123.107:6800/2970840566":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/3579976793":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/2503773312":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/761815837":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/1936835219":"2026-03-10T21:16:41.687111+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:07 vm07 bash[20771]: audit 2026-03-09T21:17:06.182742+0000 
mgr.y (mgr.24416) 39 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:07 vm07 bash[20771]: audit 2026-03-09T21:17:06.182742+0000 mgr.y (mgr.24416) 39 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:07 vm07 bash[20771]: audit 2026-03-09T21:17:06.782493+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:07 vm07 bash[20771]: audit 2026-03-09T21:17:06.782493+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:07 vm07 bash[28052]: audit 2026-03-09T21:17:06.182742+0000 mgr.y (mgr.24416) 39 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:07 vm07 bash[28052]: audit 2026-03-09T21:17:06.182742+0000 mgr.y (mgr.24416) 39 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:07 vm07 bash[28052]: audit 2026-03-09T21:17:06.782493+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:08.266 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:07 vm07 bash[28052]: audit 2026-03-09T21:17:06.782493+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:08.330 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-09T21:17:08.330 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd dump --format=json 2026-03-09T21:17:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:07 vm10 bash[23387]: audit 2026-03-09T21:17:06.182742+0000 mgr.y (mgr.24416) 39 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:07 vm10 bash[23387]: audit 2026-03-09T21:17:06.182742+0000 mgr.y (mgr.24416) 39 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:07 vm10 bash[23387]: audit 2026-03-09T21:17:06.782493+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:07 vm10 bash[23387]: audit 2026-03-09T21:17:06.782493+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:08.967 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:17:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:08 vm07 bash[28052]: cluster 2026-03-09T21:17:07.725432+0000 mgr.y (mgr.24416) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:08 vm07 bash[28052]: cluster 2026-03-09T21:17:07.725432+0000 mgr.y (mgr.24416) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:08 vm07 bash[28052]: audit 2026-03-09T21:17:08.250336+0000 mon.c (mon.2) 43 : audit [DBG] from='client.? 192.168.123.107:0/1611991357' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:08 vm07 bash[28052]: audit 2026-03-09T21:17:08.250336+0000 mon.c (mon.2) 43 : audit [DBG] from='client.? 192.168.123.107:0/1611991357' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:08 vm07 bash[20771]: cluster 2026-03-09T21:17:07.725432+0000 mgr.y (mgr.24416) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:08 vm07 bash[20771]: cluster 2026-03-09T21:17:07.725432+0000 mgr.y (mgr.24416) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:08 vm07 bash[20771]: audit 2026-03-09T21:17:08.250336+0000 mon.c (mon.2) 43 : audit [DBG] from='client.? 192.168.123.107:0/1611991357' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:08 vm07 bash[20771]: audit 2026-03-09T21:17:08.250336+0000 mon.c (mon.2) 43 : audit [DBG] from='client.? 
192.168.123.107:0/1611991357' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:08 vm10 bash[23387]: cluster 2026-03-09T21:17:07.725432+0000 mgr.y (mgr.24416) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:08 vm10 bash[23387]: cluster 2026-03-09T21:17:07.725432+0000 mgr.y (mgr.24416) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:08 vm10 bash[23387]: audit 2026-03-09T21:17:08.250336+0000 mon.c (mon.2) 43 : audit [DBG] from='client.? 192.168.123.107:0/1611991357' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:08 vm10 bash[23387]: audit 2026-03-09T21:17:08.250336+0000 mon.c (mon.2) 43 : audit [DBG] from='client.? 
192.168.123.107:0/1611991357' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:09.293180+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:09.293180+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:09.301149+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:09.301149+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: cluster 2026-03-09T21:17:09.725985+0000 mgr.y (mgr.24416) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: cluster 2026-03-09T21:17:09.725985+0000 mgr.y (mgr.24416) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.138576+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.138576+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 
2026-03-09T21:17:10.149211+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.149211+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.154185+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.154185+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.155330+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.155330+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.161646+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.551 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[20771]: audit 2026-03-09T21:17:10.161646+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:09.293180+0000 mon.a 
(mon.0) 781 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:09.293180+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:09.301149+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:09.301149+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: cluster 2026-03-09T21:17:09.725985+0000 mgr.y (mgr.24416) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: cluster 2026-03-09T21:17:09.725985+0000 mgr.y (mgr.24416) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.138576+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.138576+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.149211+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.149211+0000 mon.a (mon.0) 784 : audit [INF] 
from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.154185+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.154185+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.155330+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.155330+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.161646+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.552 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:10 vm07 bash[28052]: audit 2026-03-09T21:17:10.161646+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:09.293180+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:09.293180+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:17:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:09.301149+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:09.301149+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: cluster 2026-03-09T21:17:09.725985+0000 mgr.y (mgr.24416) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: cluster 2026-03-09T21:17:09.725985+0000 mgr.y (mgr.24416) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.138576+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.138576+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.149211+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.149211+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.154185+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.154185+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.155330+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.155330+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.161646+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:10 vm10 bash[23387]: audit 2026-03-09T21:17:10.161646+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:10.865 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:10 vm07 systemd[1]: Stopping Ceph alertmanager.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:17:11.162 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[55263]: ts=2026-03-09T21:17:10.911Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 
2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:10 vm07 bash[56010]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-alertmanager-a 2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@alertmanager.a.service: Deactivated successfully. 2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 systemd[1]: Stopped Ceph alertmanager.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 systemd[1]: Started Ceph alertmanager.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.156Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.156Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T21:17:11.163 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.157Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.107 port=9094 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: cephadm 2026-03-09T21:17:10.179921+0000 mgr.y (mgr.24416) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: cephadm 2026-03-09T21:17:10.179921+0000 mgr.y (mgr.24416) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: cephadm 2026-03-09T21:17:10.184318+0000 mgr.y (mgr.24416) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: cephadm 2026-03-09T21:17:10.184318+0000 mgr.y (mgr.24416) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: audit 2026-03-09T21:17:11.043041+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: audit 2026-03-09T21:17:11.043041+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: audit 2026-03-09T21:17:11.051657+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[20771]: audit 2026-03-09T21:17:11.051657+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: cephadm 2026-03-09T21:17:10.179921+0000 mgr.y (mgr.24416) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: cephadm 2026-03-09T21:17:10.179921+0000 mgr.y (mgr.24416) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: cephadm 2026-03-09T21:17:10.184318+0000 mgr.y (mgr.24416) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: cephadm 2026-03-09T21:17:10.184318+0000 mgr.y (mgr.24416) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: audit 2026-03-09T21:17:11.043041+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: audit 2026-03-09T21:17:11.043041+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: audit 2026-03-09T21:17:11.051657+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.509 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:11 vm07 bash[28052]: audit 2026-03-09T21:17:11.051657+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.510 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.162Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-09T21:17:11.510 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.181Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T21:17:11.510 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.181Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T21:17:11.510 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.183Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T21:17:11.510 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:11 vm07 bash[56094]: ts=2026-03-09T21:17:11.183Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: cephadm 2026-03-09T21:17:10.179921+0000 mgr.y (mgr.24416) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: cephadm 2026-03-09T21:17:10.179921+0000 mgr.y (mgr.24416) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: cephadm 2026-03-09T21:17:10.184318+0000 mgr.y (mgr.24416) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: cephadm 2026-03-09T21:17:10.184318+0000 mgr.y (mgr.24416) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: audit 2026-03-09T21:17:11.043041+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: audit 2026-03-09T21:17:11.043041+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: audit 2026-03-09T21:17:11.051657+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.585 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:11 vm10 bash[23387]: audit 2026-03-09T21:17:11.051657+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 systemd[1]: Stopping Ceph prometheus.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 
2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.825Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.826Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 
2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.826Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T21:17:11.838 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[49946]: ts=2026-03-09T21:17:11.826Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 bash[51773]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-prometheus-a 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@prometheus.a.service: Deactivated successfully. 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 systemd[1]: Stopped Ceph prometheus.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:11 vm10 systemd[1]: Started Ceph prometheus.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.045Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.045Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.045Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm10 (none))" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.045Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.045Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.046Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.047Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.048Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.048Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.048Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.049Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.413µs 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.049Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.049Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.049Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.049Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=24.827µs wal_replay_duration=883.374µs wbl_replay_duration=130ns total_replay_duration=919.851µs 2026-03-09T21:17:12.095 
INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.070Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.070Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T21:17:12.095 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.070Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T21:17:12.334 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: cephadm 2026-03-09T21:17:11.056084+0000 mgr.y (mgr.24416) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: cephadm 2026-03-09T21:17:11.056084+0000 mgr.y (mgr.24416) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: cephadm 2026-03-09T21:17:11.242184+0000 mgr.y (mgr.24416) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm10 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: cephadm 2026-03-09T21:17:11.242184+0000 mgr.y (mgr.24416) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm10 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: cluster 2026-03-09T21:17:11.726356+0000 mgr.y (mgr.24416) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: cluster 2026-03-09T21:17:11.726356+0000 mgr.y (mgr.24416) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.797446+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.797446+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.920103+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.335 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.920103+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.926378+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.926378+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.930681+0000 mon.c (mon.2) 47 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.930681+0000 mon.c (mon.2) 47 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.932341+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.932341+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.938066+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.938066+0000 mon.a (mon.0) 790 : 
audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.946690+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:12.336 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.946690+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:12.337 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:11 vm07 bash[21040]: [09/Mar/2026:21:17:11] ENGINE Bus STOPPING 2026-03-09T21:17:12.345 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: cephadm 2026-03-09T21:17:11.056084+0000 mgr.y (mgr.24416) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: cephadm 2026-03-09T21:17:11.056084+0000 mgr.y (mgr.24416) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: cephadm 2026-03-09T21:17:11.242184+0000 mgr.y (mgr.24416) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm10 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: cephadm 2026-03-09T21:17:11.242184+0000 mgr.y (mgr.24416) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm10 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: cluster 2026-03-09T21:17:11.726356+0000 mgr.y (mgr.24416) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: cluster 2026-03-09T21:17:11.726356+0000 mgr.y (mgr.24416) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.797446+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.797446+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.920103+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.920103+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.926378+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.926378+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.930681+0000 mon.c (mon.2) 47 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.930681+0000 mon.c (mon.2) 47 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.932341+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.932341+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.938066+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.938066+0000 mon.a (mon.0) 790 : 
audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.346 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.946690+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:12.346 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.090Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=19.920409ms db_storage=1.752µs remote_storage=2.424µs web_handler=1.402µs query_engine=2.174µs scrape=1.164098ms scrape_sd=161.141µs notify=10.781µs notify_sd=8.837µs rules=17.635533ms tracing=9.126µs 2026-03-09T21:17:12.346 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.090Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T21:17:12.346 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:17:12 vm10 bash[51847]: ts=2026-03-09T21:17:12.090Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: cephadm 2026-03-09T21:17:11.056084+0000 mgr.y (mgr.24416) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: cephadm 2026-03-09T21:17:11.056084+0000 mgr.y (mgr.24416) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: cephadm 2026-03-09T21:17:11.242184+0000 mgr.y (mgr.24416) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm10 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: cephadm 2026-03-09T21:17:11.242184+0000 mgr.y (mgr.24416) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm10 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: cluster 2026-03-09T21:17:11.726356+0000 mgr.y (mgr.24416) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: cluster 2026-03-09T21:17:11.726356+0000 mgr.y (mgr.24416) 46 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.797446+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.797446+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.920103+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.920103+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24416 ' entity='mgr.y' 
2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.926378+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.926378+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.930681+0000 mon.c (mon.2) 47 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.930681+0000 mon.c (mon.2) 47 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.932341+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.932341+0000 mon.c (mon.2) 48 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.938066+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.938066+0000 mon.a (mon.0) 790 : 
audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.946690+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.946690+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.948095+0000 mon.c (mon.2) 50 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.948095+0000 mon.c (mon.2) 50 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.952747+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.952747+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.963803+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:12.616 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.963803+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.965798+0000 mon.c (mon.2) 52 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.965798+0000 mon.c (mon.2) 52 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.971222+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:11.971222+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:12.015543+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:12 vm07 bash[20771]: audit 2026-03-09T21:17:12.015543+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 
2026-03-09T21:17:11.948095+0000 mon.c (mon.2) 50 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.948095+0000 mon.c (mon.2) 50 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.952747+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.952747+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.963803+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.963803+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.965798+0000 mon.c (mon.2) 52 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.965798+0000 mon.c (mon.2) 
52 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.971222+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:11.971222+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:12.015543+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:12 vm07 bash[28052]: audit 2026-03-09T21:17:12.015543+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STOPPED 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STARTING 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Serving on http://:::9283 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STARTED 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 
21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STOPPING 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STOPPED 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STARTING 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Serving on http://:::9283 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STARTED 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STOPPING 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STOPPED 2026-03-09T21:17:12.616 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STARTING 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.946690+0000 mon.c (mon.2) 49 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.948095+0000 mon.c (mon.2) 50 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": 
"dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.948095+0000 mon.c (mon.2) 50 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.952747+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.952747+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.963803+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.963803+0000 mon.c (mon.2) 51 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.965798+0000 mon.c (mon.2) 52 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.965798+0000 mon.c (mon.2) 52 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": 
"http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.971222+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:11.971222+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:12.015543+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:12 vm10 bash[23387]: audit 2026-03-09T21:17:12.015543+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:17:12.938 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:12.957 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Serving on http://:::9283 2026-03-09T21:17:12.957 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:12 vm07 bash[21040]: [09/Mar/2026:21:17:12] ENGINE Bus STARTED 2026-03-09T21:17:13.232 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:17:13.232 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":66,"fsid":"22c897f4-1bfc-11f1-adaa-13127443f8b3","created":"2026-03-09T21:09:52.807363+0000","modified":"2026-03-09T21:16:41.687140+0000","last_up_change":"2026-03-09T21:15:47.106059+0000","last_in_change":"2026-03-09T21:15:28.039101+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T21:12:50.641417+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-09T21:16:08.262626+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promot
e":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-09T21:16:10.323890+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_bala
nce":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-09T21:16:11.380682+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"64","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":64,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T21:16:12.256033+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T21:16:14.377166+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"5ef293c2-89b5-4f27-a447-e0750ac5c165","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6801","nonce":2141296969}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2141296969}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":2141296969}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6803","nonce":2141296969}]},"public_addr":"192.168.123.107:6801/2141296969","cluster_addr":"192.168.123.107:6802/2141296969","heartbeat_back_addr":"192.168.123.107:6804/2141296969","heartbeat_front_addr":"192.168.123.107:6803/2141296969","state":["exists","up"]},{"osd":1,"uuid":"98ca1795-9ed4-4ffb-8a3f-f26e615f554f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6805","nonce":4103893323}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":4103893323}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":4103893323}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6807","nonce":4103893323}]},"public_addr":"192.168.123.107:6805/4103893323","cluster_addr":"192.168.123.107:6806/4103893323","heartbeat_back_addr":"192.168.123.107:6808/4103893323","heartbeat_front_addr":"192.168.123.107:6807/4103893323","state":["exists","up"]},{"osd":2,"uuid":"4a040af0-0bb5-4407-ba5f-64091d0e0685","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":18,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6809","nonce":2553486713}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":2553486713}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":2553486713}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6811","nonce":2553486713}]},"public_addr":"192.168.123.107:6809/2553486713","cluster_addr":"192.168.123.107:6810/2553486713","heartbeat_back_addr":"192.168.123.107:6812/2553486713","heartbeat_front_addr":"192.168.123.107:6811/2553486713","state":["exists","up"]},{"osd":3,"uuid":"82b53895-a55e-4a96-84b2-f1efa2657688","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6813","nonce":1113345127}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1113345127}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1113345127}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6815","nonce":1113345127}]},"public_addr":"192.168.123.107:6813/1113345127","cluster_addr":"192.168.123.107:6814/1113345127","heartbeat_back_addr":"192.168.123.107:6816/1113345127","heartbeat_front_addr":"192.168.123.107:6815/1113345127","state":["exists","up"]},{"osd":4,"uuid":"1a2754cd-a1ed-4d39-ac11-6dd3a5aa4512","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":4164782911}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6801","nonce":4164782911}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6803","nonce":4164782911}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2",
"addr":"192.168.123.110:6802","nonce":4164782911}]},"public_addr":"192.168.123.110:6800/4164782911","cluster_addr":"192.168.123.110:6801/4164782911","heartbeat_back_addr":"192.168.123.110:6803/4164782911","heartbeat_front_addr":"192.168.123.110:6802/4164782911","state":["exists","up"]},{"osd":5,"uuid":"94d2c197-ad39-4db0-9389-4183a78f1d0a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":1216077544}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6805","nonce":1216077544}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6807","nonce":1216077544}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":1216077544}]},"public_addr":"192.168.123.110:6804/1216077544","cluster_addr":"192.168.123.110:6805/1216077544","heartbeat_back_addr":"192.168.123.110:6807/1216077544","heartbeat_front_addr":"192.168.123.110:6806/1216077544","state":["exists","up"]},{"osd":6,"uuid":"b9ca0fe4-bec8-42a3-9f19-f8c556e71c46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":646422706}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6809","nonce":646422706}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6811","nonce":646422706}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":646422706}]},"public_addr":"192.168.123.110:6808/646422706","cluster_addr":"192.168.123.110:6809/646422706","heartbeat_back_addr":"192.168.123.110:6811/646422706","heartbeat_front_addr":"192.168.123.110:6810/646422706","state":["exists","up"]},{"osd":7,"uuid":"1f0752e8-2e42-4ee3-ac34-768b5409242e","up":1,"in":1,"weight":1,"primary_affinity
":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":2049527874}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6813","nonce":2049527874}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6815","nonce":2049527874}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":2049527874}]},"public_addr":"192.168.123.110:6812/2049527874","cluster_addr":"192.168.123.110:6813/2049527874","heartbeat_back_addr":"192.168.123.110:6815/2049527874","heartbeat_front_addr":"192.168.123.110:6814/2049527874","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:11:38.098561+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:12:13.068960+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:12:47.193524+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:13:22.119872+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:13:56.136897+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:14:32.010601+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547
738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:15:08.102734+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T21:15:43.982513+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/2719081918":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/1185689761":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/2131061409":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/1110217514":"2026-03-10T21:10:14.562158+0000","192.168.123.107:6800/1914116107":"2026-03-10T21:10:14.562158+0000","192.168.123.107:6800/3796220318":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/985741243":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/4284336732":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/2870185589":"2026-03-10T21:16:41.687111+0000","192.168.123.107:6800/2970840566":"2026-03-10T21:10:04.153229+0000","192.168.123.107:0/3579976793":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/2503773312":"2026-03-10T21:16:41.687111+0000","192.168.123.107:0/761815837":"2026-03-10T21:10:14.562158+0000","192.168.123.107:0/1936835219":"2026-03-10T21:16:41.687111+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T21:17:13.244 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[56094]: ts=2026-03-09T21:17:13.163Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000811786s 
2026-03-09T21:17:13.292 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.0 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.1 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.2 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.3 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.4 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.5 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.6 flush_pg_stats 2026-03-09T21:17:13.293 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 
22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph tell osd.7 flush_pg_stats 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.931476+0000 mgr.y (mgr.24416) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.931476+0000 mgr.y (mgr.24416) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.932842+0000 mgr.y (mgr.24416) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.932842+0000 mgr.y (mgr.24416) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.947354+0000 mgr.y (mgr.24416) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.947354+0000 mgr.y (mgr.24416) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.948709+0000 mgr.y (mgr.24416) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.948709+0000 mgr.y (mgr.24416) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.964795+0000 mgr.y (mgr.24416) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.964795+0000 mgr.y (mgr.24416) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.966612+0000 mgr.y (mgr.24416) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:11.966612+0000 mgr.y (mgr.24416) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:13.231844+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.107:0/3975300491' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:13 vm07 bash[20771]: audit 2026-03-09T21:17:13.231844+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 
192.168.123.107:0/3975300491' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.931476+0000 mgr.y (mgr.24416) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.931476+0000 mgr.y (mgr.24416) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.932842+0000 mgr.y (mgr.24416) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.932842+0000 mgr.y (mgr.24416) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.947354+0000 mgr.y (mgr.24416) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.947354+0000 mgr.y (mgr.24416) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.948709+0000 mgr.y (mgr.24416) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.948709+0000 mgr.y (mgr.24416) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.964795+0000 mgr.y (mgr.24416) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.964795+0000 mgr.y (mgr.24416) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.966612+0000 mgr.y (mgr.24416) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:11.966612+0000 mgr.y (mgr.24416) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:13.231844+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.107:0/3975300491' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:13.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:13 vm07 bash[28052]: audit 2026-03-09T21:17:13.231844+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 
192.168.123.107:0/3975300491' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.931476+0000 mgr.y (mgr.24416) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.931476+0000 mgr.y (mgr.24416) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.932842+0000 mgr.y (mgr.24416) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.932842+0000 mgr.y (mgr.24416) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.947354+0000 mgr.y (mgr.24416) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.947354+0000 mgr.y (mgr.24416) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.948709+0000 mgr.y (mgr.24416) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.948709+0000 mgr.y (mgr.24416) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm10.local:3000"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.964795+0000 mgr.y (mgr.24416) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.964795+0000 mgr.y (mgr.24416) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.966612+0000 mgr.y (mgr.24416) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:11.966612+0000 mgr.y (mgr.24416) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm10.local:9095"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:13.231844+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.107:0/3975300491' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:13 vm10 bash[23387]: audit 2026-03-09T21:17:13.231844+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 
192.168.123.107:0/3975300491' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:17:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:14 vm07 bash[20771]: cluster 2026-03-09T21:17:13.727019+0000 mgr.y (mgr.24416) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:14 vm07 bash[20771]: cluster 2026-03-09T21:17:13.727019+0000 mgr.y (mgr.24416) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:14 vm07 bash[28052]: cluster 2026-03-09T21:17:13.727019+0000 mgr.y (mgr.24416) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:14 vm07 bash[28052]: cluster 2026-03-09T21:17:13.727019+0000 mgr.y (mgr.24416) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:14 vm10 bash[23387]: cluster 2026-03-09T21:17:13.727019+0000 mgr.y (mgr.24416) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:14 vm10 bash[23387]: cluster 2026-03-09T21:17:13.727019+0000 mgr.y (mgr.24416) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:16.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:16 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:17:17.115 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:16 vm07 bash[20771]: cluster 2026-03-09T21:17:15.727514+0000 mgr.y (mgr.24416) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:16 vm07 bash[20771]: cluster 2026-03-09T21:17:15.727514+0000 mgr.y (mgr.24416) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:16 vm07 bash[20771]: audit 2026-03-09T21:17:16.454285+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:16 vm07 bash[20771]: audit 2026-03-09T21:17:16.454285+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:16 vm07 bash[20771]: audit 2026-03-09T21:17:16.462371+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:16 vm07 bash[20771]: audit 2026-03-09T21:17:16.462371+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:16 vm07 bash[28052]: cluster 2026-03-09T21:17:15.727514+0000 mgr.y (mgr.24416) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:16 vm07 bash[28052]: cluster 2026-03-09T21:17:15.727514+0000 mgr.y (mgr.24416) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:16 vm07 
bash[28052]: audit 2026-03-09T21:17:16.454285+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:16 vm07 bash[28052]: audit 2026-03-09T21:17:16.454285+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:16 vm07 bash[28052]: audit 2026-03-09T21:17:16.462371+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:16 vm07 bash[28052]: audit 2026-03-09T21:17:16.462371+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:16 vm10 bash[23387]: cluster 2026-03-09T21:17:15.727514+0000 mgr.y (mgr.24416) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:16 vm10 bash[23387]: cluster 2026-03-09T21:17:15.727514+0000 mgr.y (mgr.24416) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:16 vm10 bash[23387]: audit 2026-03-09T21:17:16.454285+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:16 vm10 bash[23387]: audit 2026-03-09T21:17:16.454285+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:16 vm10 bash[23387]: audit 2026-03-09T21:17:16.462371+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:16 vm10 bash[23387]: audit 
2026-03-09T21:17:16.462371+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.001 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.002 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.004 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.005 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.006 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.006 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.010 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.011 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:16.193746+0000 mgr.y (mgr.24416) 55 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:16.193746+0000 mgr.y (mgr.24416) 55 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.502110+0000 mon.a (mon.0) 796 : 
audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.502110+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.509403+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.509403+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.510449+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.510449+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.511346+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.511346+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.517405+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24416 ' 
entity='mgr.y' 2026-03-09T21:17:18.095 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:17 vm07 bash[20771]: audit 2026-03-09T21:17:17.517405+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:16.193746+0000 mgr.y (mgr.24416) 55 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:16.193746+0000 mgr.y (mgr.24416) 55 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.502110+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.502110+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.509403+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.509403+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.510449+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 
2026-03-09T21:17:17.510449+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.511346+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.511346+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.517405+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.096 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:17 vm07 bash[28052]: audit 2026-03-09T21:17:17.517405+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:16.193746+0000 mgr.y (mgr.24416) 55 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:16.193746+0000 mgr.y (mgr.24416) 55 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.502110+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.502110+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.509403+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.509403+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.510449+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.510449+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.511346+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:18.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.511346+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:17:18.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.517405+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 
09 21:17:17 vm10 bash[23387]: audit 2026-03-09T21:17:17.517405+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:17:18.697 INFO:teuthology.orchestra.run.vm07.stdout:107374182448 2026-03-09T21:17:18.697 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.3 2026-03-09T21:17:18.697 INFO:teuthology.orchestra.run.vm07.stdout:188978561051 2026-03-09T21:17:18.697 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.6 2026-03-09T21:17:18.738 INFO:teuthology.orchestra.run.vm07.stdout:219043332115 2026-03-09T21:17:18.738 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.7 2026-03-09T21:17:18.937 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:18 vm07 bash[20771]: cluster 2026-03-09T21:17:17.727920+0000 mgr.y (mgr.24416) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:18.937 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:18 vm07 bash[20771]: cluster 2026-03-09T21:17:17.727920+0000 mgr.y (mgr.24416) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:18.947 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:17:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:17:18.947 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:18 vm07 bash[28052]: 
cluster 2026-03-09T21:17:17.727920+0000 mgr.y (mgr.24416) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:18.947 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:18 vm07 bash[28052]: cluster 2026-03-09T21:17:17.727920+0000 mgr.y (mgr.24416) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:18.988 INFO:teuthology.orchestra.run.vm07.stdout:55834574910 2026-03-09T21:17:18.988 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.1 2026-03-09T21:17:19.102 INFO:teuthology.orchestra.run.vm07.stdout:34359738437 2026-03-09T21:17:19.102 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.0 2026-03-09T21:17:19.163 INFO:teuthology.orchestra.run.vm07.stdout:128849018922 2026-03-09T21:17:19.164 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.4 2026-03-09T21:17:19.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:18 vm10 bash[23387]: cluster 2026-03-09T21:17:17.727920+0000 mgr.y (mgr.24416) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:19.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:18 vm10 bash[23387]: cluster 2026-03-09T21:17:17.727920+0000 mgr.y (mgr.24416) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:19.194 INFO:teuthology.orchestra.run.vm07.stdout:77309411384 2026-03-09T21:17:19.194 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.2 2026-03-09T21:17:19.201 INFO:teuthology.orchestra.run.vm07.stdout:154618822690 2026-03-09T21:17:19.201 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph osd last-stat-seq osd.5 2026-03-09T21:17:21.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:20 vm07 bash[20771]: cluster 2026-03-09T21:17:19.728399+0000 mgr.y (mgr.24416) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:17:21.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:20 vm07 bash[20771]: cluster 2026-03-09T21:17:19.728399+0000 mgr.y (mgr.24416) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:17:21.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:20 vm07 bash[28052]: cluster 2026-03-09T21:17:19.728399+0000 mgr.y (mgr.24416) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:17:21.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:20 vm07 bash[28052]: cluster 2026-03-09T21:17:19.728399+0000 mgr.y (mgr.24416) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:17:21.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:20 vm10 bash[23387]: cluster 2026-03-09T21:17:19.728399+0000 mgr.y (mgr.24416) 
57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:17:21.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:20 vm10 bash[23387]: cluster 2026-03-09T21:17:19.728399+0000 mgr.y (mgr.24416) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:17:21.615 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:17:21 vm07 bash[56094]: ts=2026-03-09T21:17:21.164Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002020871s 2026-03-09T21:17:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:22 vm07 bash[20771]: cluster 2026-03-09T21:17:21.728784+0000 mgr.y (mgr.24416) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T21:17:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:22 vm07 bash[20771]: cluster 2026-03-09T21:17:21.728784+0000 mgr.y (mgr.24416) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T21:17:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:22 vm07 bash[28052]: cluster 2026-03-09T21:17:21.728784+0000 mgr.y (mgr.24416) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T21:17:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:22 vm07 bash[28052]: cluster 2026-03-09T21:17:21.728784+0000 mgr.y (mgr.24416) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T21:17:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:22 vm10 bash[23387]: cluster 2026-03-09T21:17:21.728784+0000 mgr.y (mgr.24416) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 
active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T21:17:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:22 vm10 bash[23387]: cluster 2026-03-09T21:17:21.728784+0000 mgr.y (mgr.24416) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-09T21:17:23.667 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.676 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.676 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.676 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.677 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.677 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.682 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:23.685 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:24.276 INFO:teuthology.orchestra.run.vm07.stdout:154618822691 2026-03-09T21:17:24.374 INFO:teuthology.orchestra.run.vm07.stdout:77309411385 2026-03-09T21:17:24.574 INFO:teuthology.orchestra.run.vm07.stdout:188978561052 2026-03-09T21:17:24.584 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822690 got 154618822691 for osd.5 2026-03-09T21:17:24.584 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:24.634 
INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411384 got 77309411385 for osd.2 2026-03-09T21:17:24.635 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:24.722 INFO:tasks.cephadm.ceph_manager.ceph:need seq 188978561051 got 188978561052 for osd.6 2026-03-09T21:17:24.722 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:24.832 INFO:teuthology.orchestra.run.vm07.stdout:34359738438 2026-03-09T21:17:24.861 INFO:teuthology.orchestra.run.vm07.stdout:128849018923 2026-03-09T21:17:24.866 INFO:teuthology.orchestra.run.vm07.stdout:55834574911 2026-03-09T21:17:24.962 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738437 got 34359738438 for osd.0 2026-03-09T21:17:24.962 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: cluster 2026-03-09T21:17:23.729455+0000 mgr.y (mgr.24416) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: cluster 2026-03-09T21:17:23.729455+0000 mgr.y (mgr.24416) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.272711+0000 mon.c (mon.2) 56 : audit [DBG] from='client.? 192.168.123.107:0/4156937725' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.272711+0000 mon.c (mon.2) 56 : audit [DBG] from='client.? 
192.168.123.107:0/4156937725' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.373330+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.107:0/2523043526' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.373330+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.107:0/2523043526' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.570126+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.107:0/898961749' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.570126+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.107:0/898961749' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.831845+0000 mon.b (mon.1) 38 : audit [DBG] from='client.? 192.168.123.107:0/467890572' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:24 vm07 bash[20771]: audit 2026-03-09T21:17:24.831845+0000 mon.b (mon.1) 38 : audit [DBG] from='client.? 
192.168.123.107:0/467890572' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: cluster 2026-03-09T21:17:23.729455+0000 mgr.y (mgr.24416) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:24.964 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: cluster 2026-03-09T21:17:23.729455+0000 mgr.y (mgr.24416) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.272711+0000 mon.c (mon.2) 56 : audit [DBG] from='client.? 192.168.123.107:0/4156937725' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.272711+0000 mon.c (mon.2) 56 : audit [DBG] from='client.? 192.168.123.107:0/4156937725' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.373330+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.107:0/2523043526' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.373330+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 
192.168.123.107:0/2523043526' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.570126+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.107:0/898961749' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.570126+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.107:0/898961749' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.831845+0000 mon.b (mon.1) 38 : audit [DBG] from='client.? 192.168.123.107:0/467890572' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T21:17:24.965 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:24 vm07 bash[28052]: audit 2026-03-09T21:17:24.831845+0000 mon.b (mon.1) 38 : audit [DBG] from='client.? 
192.168.123.107:0/467890572' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T21:17:25.025 INFO:tasks.cephadm.ceph_manager.ceph:need seq 128849018922 got 128849018923 for osd.4 2026-03-09T21:17:25.025 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:25.049 INFO:teuthology.orchestra.run.vm07.stdout:219043332116 2026-03-09T21:17:25.053 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574910 got 55834574911 for osd.1 2026-03-09T21:17:25.053 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:25.056 INFO:teuthology.orchestra.run.vm07.stdout:107374182449 2026-03-09T21:17:25.143 INFO:tasks.cephadm.ceph_manager.ceph:need seq 219043332115 got 219043332116 for osd.7 2026-03-09T21:17:25.143 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:25.178 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182448 got 107374182449 for osd.3 2026-03-09T21:17:25.178 DEBUG:teuthology.parallel:result is None 2026-03-09T21:17:25.178 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T21:17:25.178 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph pg dump --format=json 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: cluster 2026-03-09T21:17:23.729455+0000 mgr.y (mgr.24416) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: cluster 2026-03-09T21:17:23.729455+0000 mgr.y (mgr.24416) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 
2026-03-09T21:17:24.272711+0000 mon.c (mon.2) 56 : audit [DBG] from='client.? 192.168.123.107:0/4156937725' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.272711+0000 mon.c (mon.2) 56 : audit [DBG] from='client.? 192.168.123.107:0/4156937725' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.373330+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.107:0/2523043526' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.373330+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.107:0/2523043526' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.570126+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.107:0/898961749' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.570126+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.107:0/898961749' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.831845+0000 mon.b (mon.1) 38 : audit [DBG] from='client.? 
192.168.123.107:0/467890572' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T21:17:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:24 vm10 bash[23387]: audit 2026-03-09T21:17:24.831845+0000 mon.b (mon.1) 38 : audit [DBG] from='client.? 192.168.123.107:0/467890572' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:24.858834+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.107:0/4239769380' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:24.858834+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.107:0/4239769380' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:24.866729+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.107:0/1143290242' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:24.866729+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.107:0/1143290242' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:25.042481+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 
192.168.123.107:0/986013964' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:25.042481+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.107:0/986013964' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:25.055202+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.107:0/823898337' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:25 vm07 bash[20771]: audit 2026-03-09T21:17:25.055202+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.107:0/823898337' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:24.858834+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.107:0/4239769380' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:24.858834+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.107:0/4239769380' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:24.866729+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 
192.168.123.107:0/1143290242' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:24.866729+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.107:0/1143290242' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:25.042481+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.107:0/986013964' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:25.042481+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.107:0/986013964' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:25.055202+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.107:0/823898337' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T21:17:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:25 vm07 bash[28052]: audit 2026-03-09T21:17:25.055202+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.107:0/823898337' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:24.858834+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 
192.168.123.107:0/4239769380' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:24.858834+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.107:0/4239769380' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:24.866729+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.107:0/1143290242' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:24.866729+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.107:0/1143290242' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:25.042481+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.107:0/986013964' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:25.042481+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.107:0/986013964' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:25.055202+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 
192.168.123.107:0/823898337' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T21:17:26.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:25 vm10 bash[23387]: audit 2026-03-09T21:17:25.055202+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.107:0/823898337' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T21:17:26.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:26 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:17:27.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:26 vm07 bash[20771]: cluster 2026-03-09T21:17:25.729905+0000 mgr.y (mgr.24416) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:26 vm07 bash[20771]: cluster 2026-03-09T21:17:25.729905+0000 mgr.y (mgr.24416) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:26 vm07 bash[20771]: audit 2026-03-09T21:17:26.804481+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:26 vm07 bash[20771]: audit 2026-03-09T21:17:26.804481+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:26 vm07 bash[28052]: cluster 2026-03-09T21:17:25.729905+0000 mgr.y (mgr.24416) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:26 vm07 bash[28052]: cluster 2026-03-09T21:17:25.729905+0000 mgr.y (mgr.24416) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:26 vm07 bash[28052]: audit 2026-03-09T21:17:26.804481+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:27.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:26 vm07 bash[28052]: audit 2026-03-09T21:17:26.804481+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:26 vm10 bash[23387]: cluster 2026-03-09T21:17:25.729905+0000 mgr.y (mgr.24416) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:26 vm10 bash[23387]: cluster 2026-03-09T21:17:25.729905+0000 mgr.y (mgr.24416) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:26 vm10 bash[23387]: audit 2026-03-09T21:17:26.804481+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:26 vm10 bash[23387]: audit 2026-03-09T21:17:26.804481+0000 mon.c (mon.2) 59 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T21:17:28.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:27 vm10 bash[23387]: audit 2026-03-09T21:17:26.202700+0000 mgr.y (mgr.24416) 61 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:28.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:27 vm10 bash[23387]: audit 2026-03-09T21:17:26.202700+0000 mgr.y (mgr.24416) 61 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:28.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:27 vm07 bash[20771]: audit 2026-03-09T21:17:26.202700+0000 mgr.y (mgr.24416) 61 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:27 vm07 bash[20771]: audit 2026-03-09T21:17:26.202700+0000 mgr.y (mgr.24416) 61 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:27 vm07 bash[28052]: audit 2026-03-09T21:17:26.202700+0000 mgr.y (mgr.24416) 61 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:27 vm07 bash[28052]: audit 2026-03-09T21:17:26.202700+0000 mgr.y (mgr.24416) 61 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:28.969 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:28 vm07 bash[20771]: cluster 2026-03-09T21:17:27.730262+0000 mgr.y (mgr.24416) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:28.969 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:28 vm07 bash[20771]: cluster 2026-03-09T21:17:27.730262+0000 mgr.y (mgr.24416) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:28.970 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:17:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:17:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:28 vm10 bash[23387]: cluster 2026-03-09T21:17:27.730262+0000 mgr.y (mgr.24416) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:28 vm10 bash[23387]: cluster 2026-03-09T21:17:27.730262+0000 mgr.y (mgr.24416) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:28 vm07 bash[28052]: cluster 2026-03-09T21:17:27.730262+0000 mgr.y (mgr.24416) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:28 vm07 bash[28052]: cluster 2026-03-09T21:17:27.730262+0000 mgr.y (mgr.24416) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:29.878 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:30.168 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:17:30.169 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-09T21:17:30.230 
INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":27,"stamp":"2026-03-09T21:17:29.730408+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":911,"num_read_kb":770,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221292,"kb_used_data":6588,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518100,"statfs":{"total":171765137408,"available":171538534400,"internally_reserved":0,"allocated":6746112,"data_stored":3422313,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12710,"internal_metadata":219663962},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1
},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":15,"num_read_kb":15,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002701"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707276+0000","last_change":"2026-03-09T21:16:17.158946+0000","last_active":"2026-03-09T21:16:41.707276+0000","last_peered":"2026-03-09T21:16:41.707276+0000","last_clean":"2026-03-09T21:16:41.707276+0000","last_became_active":"2026-03-09T21:16:17.158240+0000","last_became_peered":"2026-03-09T21:16:17.158240+0000","last_unstale":"2026-03-09T21:16:41.707276+0000","last_undegraded":"2026-03-09T21:16:41.707276+0000","last_fullsized":"2026-03-09T21:16:41.707276+0000","mapping_epoch":60,"log_start":"0'0","
ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:29:11.157141+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225923+0000","last_change":"2026-03-09T21:16:10.226478+0000","last_active":"20
26-03-09T21:16:42.225923+0000","last_peered":"2026-03-09T21:16:42.225923+0000","last_clean":"2026-03-09T21:16:42.225923+0000","last_became_active":"2026-03-09T21:16:10.226353+0000","last_became_peered":"2026-03-09T21:16:10.226353+0000","last_unstale":"2026-03-09T21:16:42.225923+0000","last_undegraded":"2026-03-09T21:16:42.225923+0000","last_fullsized":"2026-03-09T21:16:42.225923+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:42:13.558560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707322+0000","last_change":"2026-03-09T21:16:12.239920+0000","last_active":"2026-03-09T21:16:41.707322+0000","last_peered":"2026-03-09T21:16:41.707322+0000","last_clean":"2026-03-09T21:16:41.707322+0000","last_became_active":"2026-03-09T21:16:12.239835+0000","last_became_peered":"2026-03-09T21:16:12.239835+0000","last_unstale":"2026-03-09T21:16:41.707322+0000","last_undegraded":"2026-03-09T21:16:41.707322+0000","last_fullsized":"2026-03-09T21:16:41.707322+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:10:40.718574+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698096+0000","last_change":"2026-03-09T21:16:14.265529+0000","last_active":"2026-03-09T21:16:41.698096+0000","last_peered":"2026-03-09T21:16:41.698096+0000","last_clean":"2026-03-09T21:16:41.698096+0000","last_became_active":"2026-03-09T21:16:14.265424+0000","last_became_peered":"2026-03-09T21:16:14.265424+00
00","last_unstale":"2026-03-09T21:16:41.698096+0000","last_undegraded":"2026-03-09T21:16:41.698096+0000","last_fullsized":"2026-03-09T21:16:41.698096+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:34:14.049193+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.1e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702053+0000","last_change":"2026-03-09T21:16:10.209450+0000","last_active":"2026-03-09T21:16:41.702053+0000","last_peered":"2026-03-09T21:16:41.702053+0000","last_clean":"2026-03-09T21:16:41.702053+0000","last_became_active":"2026-03-09T21:16:10.209260+0000","last_became_peered":"2026-03-09T21:16:10.209260+0000","last_unstale":"2026-03-09T21:16:41.702053+0000","last_undegraded":"2026-03-09T21:16:41.702053+0000","last_fullsized":"2026-03-09T21:16:41.702053+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:51:55.928780+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225814+0000","last_change":"2026-03-09T21:16:12.245039+0000","last_active":"2026-03-09T21:16:42.225814+0000","last_peered":"2026-03-09T21:16:42.225814+0000","last_clean":"2026-03-09T21:16:42.225814+0000","last_became_active":"2026-03-09T21:16:12.244951+0000","last_became_peered":"2026-03-09T21:16:12.244951+0000","last_unstale":"2026-03-09T21:16:42.225814+0000","last_undegraded":"2026-03-09T21:16:42.225814+0000","last_fullsized":"2026-03-09T21:16:42.225814+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:04:53.656668+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.696585+0000","last_change":"2026-03-09T21:16:14.291218+0000","last_active":"2026-03-09T21:16:41.696585+0000","last_peered":"2026-03-09T21:16:41.696585+0000","last_clean":"2026-03-09T21:16:41.696585+0000","last_became_active":"2026-03-09T21:16:14.291087+0000","last_became_peered":"2026-03-09T21:16:14.291087+
0000","last_unstale":"2026-03-09T21:16:41.696585+0000","last_undegraded":"2026-03-09T21:16:41.696585+0000","last_fullsized":"2026-03-09T21:16:41.696585+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:59:08.923161+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"6.1a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697051+0000","last_change":"2026-03-09T21:16:16.272940+0000","last_active":"2026-03-09T21:16:41.697051+0000","last_peered":"2026-03-09T21:16:41.697051+0000","last_clean":"2026-03-09T21:16:41.697051+0000","last_became_active":"2026-03-09T21:16:16.272824+0000","last_became_peered":"2026-03-09T21:16:16.272824+0000","last_unstale":"2026-03-09T21:16:41.697051+0000","last_undegraded":"2026-03-09T21:16:41.697051+0000","last_fullsized":"2026-03-09T21:16:41.697051+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:42:46.466635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323215+0000","last_change":"2026-03-09T21:16:10.230625+0000","last_active":"2026-03-09T21:16:42.323215+0000","last_peered":"2026-03-09T21:16:42.323215+0000","last_clean":"2026-03-09T21:16:42.323215+0000","last_became_active":"2026-03-09T21:16:10.222805+0000","last_became_peered":"2026-03-09T21:16:10.222805+0000","last_unstale":"2026-03-09T21:16:42.323215+0000","last_undegraded":"2026-03-09T21:16:42.323215+0000","last_fullsized":"2026-03-09T21:16:42.323215+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148
205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:17:20.278109+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321899+0000","last_change":"2026-03-09T21:16:12.233737+0000","last_active":"2026-03-09T21:16:42.321899+0000","last_peered":"2026-03-09T21:16:42.321899+0000","last_clean":"2026-03-09T21:16:42.321899+0000","last_became_active":"2026-03-09T21:16:12.233527+0000","last_became_peered":"2026-03-09T21:16:12.233527+0000","l
ast_unstale":"2026-03-09T21:16:42.321899+0000","last_undegraded":"2026-03-09T21:16:42.321899+0000","last_fullsized":"2026-03-09T21:16:42.321899+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:26:30.048144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.1a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325623+0000","last_change":"2026-03-09T21:16:14.303232+0000","last_active":"2026-03-09T21:16:42.325623+0000","last_peered":"2026-03-09T21:16:42.325623+0000","last_clean":"2026-03-09T21:16:42.325623+0000","last_became_active":"2026-03-09T21:16:14.302938+0000","last_became_peered":"2026-03-09T21:16:14.302938+0000","last_unstale":"2026-03-09T21:16:42.325623+0000","last_undegraded":"2026-03-09T21:16:42.325623+0000","last_fullsized":"2026-03-09T21:16:42.325623+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:52:53.558981+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321836+0000","last_change":"2026-03-09T21:16:16.270387+0000","last_active":"2026-03-09T21:16:42.321836+0000","last_peered":"2026-03-09T21:16:42.321836+0000","last_clean":"2026-03-09T21:16:42.321836+0000","last_became_active":"2026-03-09T21:16:16.270201+0000","last_became_peered":"2026-03-09T21:16:16.270201+0000","last_unstale":"2026-03-09T21:16:42.321836+0000","last_undegraded":"2026-03-09T21:16:42.321836+0000","last_fullsized":"2026-03-09T21:16:42.321836+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234
556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:34:04.006503+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324930+0000","last_change":"2026-03-09T21:16:10.230498+0000","last_active":"2026-03-09T21:16:42.324930+0000","last_peered":"2026-03-09T21:16:42.324930+0000","last_clean":"2026-03-09T21:16:42.324930+0000","last_became_active":"2026-03-09T21:16:10.224476+0000","last_became_peered":"2026-03-09T21:16:10.224476+0000","las
t_unstale":"2026-03-09T21:16:42.324930+0000","last_undegraded":"2026-03-09T21:16:42.324930+0000","last_fullsized":"2026-03-09T21:16:42.324930+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:15:22.994320+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","ve
rsion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321020+0000","last_change":"2026-03-09T21:16:12.229777+0000","last_active":"2026-03-09T21:16:42.321020+0000","last_peered":"2026-03-09T21:16:42.321020+0000","last_clean":"2026-03-09T21:16:42.321020+0000","last_became_active":"2026-03-09T21:16:12.229677+0000","last_became_peered":"2026-03-09T21:16:12.229677+0000","last_unstale":"2026-03-09T21:16:42.321020+0000","last_undegraded":"2026-03-09T21:16:42.321020+0000","last_fullsized":"2026-03-09T21:16:42.321020+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:29:57.542544+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320756+0000","last_change":"2026-03-09T21:16:14.282348+0000","last_active":"2026-03-09T21:16:42.320756+0000","last_peered":"2026-03-09T21:16:42.320756+0000","last_clean":"2026-03-09T21:16:42.320756+0000","last_became_active":"2026-03-09T21:16:14.281898+0000","last_became_peered":"2026-03-09T21:16:14.281898+0000","last_unstale":"2026-03-09T21:16:42.320756+0000","last_undegraded":"2026-03-09T21:16:42.320756+0000","last_fullsized":"2026-03-09T21:16:42.320756+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:28:23.086143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225429+0000","last_change":"2026-03-09T21:16:16.277041+0000","last_active":"2026-03-09T21:16:42.225429+0000","last_peered":"2026-03-09T21:16:42.225429+0000","last_clean":"2026-03-09T21:16:42.225429+0000","last_became_active":"2026-03-09T21:16:16.275293+0000","last_became_peered":"2026-03-09T21:16:16.275293+0000
","last_unstale":"2026-03-09T21:16:42.225429+0000","last_undegraded":"2026-03-09T21:16:42.225429+0000","last_fullsized":"2026-03-09T21:16:42.225429+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:19:43.875198+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a
","version":"62'19","reported_seq":60,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213814+0000","last_change":"2026-03-09T21:16:12.254801+0000","last_active":"2026-03-09T21:16:42.213814+0000","last_peered":"2026-03-09T21:16:42.213814+0000","last_clean":"2026-03-09T21:16:42.213814+0000","last_became_active":"2026-03-09T21:16:12.248116+0000","last_became_peered":"2026-03-09T21:16:12.248116+0000","last_unstale":"2026-03-09T21:16:42.213814+0000","last_undegraded":"2026-03-09T21:16:42.213814+0000","last_fullsized":"2026-03-09T21:16:42.213814+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:44:20.112832+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323327+0000","last_change":"2026-03-09T21:16:10.230055+0000","last_active":"2026-03-09T21:16:42.323327+0000","last_peered":"2026-03-09T21:16:42.323327+0000","last_clean":"2026-03-09T21:16:42.323327+0000","last_became_active":"2026-03-09T21:16:10.225603+0000","last_became_peered":"2026-03-09T21:16:10.225603+0000","last_unstale":"2026-03-09T21:16:42.323327+0000","last_undegraded":"2026-03-09T21:16:42.323327+0000","last_fullsized":"2026-03-09T21:16:42.323327+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:45:18.737318+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699826+0000","last_change":"2026-03-09T21:16:14.265321+0000","last_active":"2026-03-09T21:16:41.699826+0000","last_peered":"2026-03-09T21:16:41.699826+0000","last_clean":"2026-03-09T21:16:41.699826+0000","last_became_active":"2026-03-09T21:16:14.264421+0000","last_became_peered":"2026-03-09T21:16:14.264421+0000",
"last_unstale":"2026-03-09T21:16:41.699826+0000","last_undegraded":"2026-03-09T21:16:41.699826+0000","last_fullsized":"2026-03-09T21:16:41.699826+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:27:09.495584+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320587+0000","last_change":"2026-03-09T21:16:16.274797+0000","last_active":"2026-03-09T21:16:42.320587+0000","last_peered":"2026-03-09T21:16:42.320587+0000","last_clean":"2026-03-09T21:16:42.320587+0000","last_became_active":"2026-03-09T21:16:16.274606+0000","last_became_peered":"2026-03-09T21:16:16.274606+0000","last_unstale":"2026-03-09T21:16:42.320587+0000","last_undegraded":"2026-03-09T21:16:42.320587+0000","last_fullsized":"2026-03-09T21:16:42.320587+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:36:26.687182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701705+0000","last_change":"2026-03-09T21:16:12.245416+0000","last_active":"2026-03-09T21:16:41.701705+0000","last_peered":"2026-03-09T21:16:41.701705+0000","last_clean":"2026-03-09T21:16:41.701705+0000","last_became_active":"2026-03-09T21:16:12.245076+0000","last_became_peered":"2026-03-09T21:16:12.245076+0000","last_unstale":"2026-03-09T21:16:41.701705+0000","last_undegraded":"2026-03-09T21:16:41.701705+0000","last_fullsized":"2026-03-09T21:16:41.701705+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186
072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:07:39.100162+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698894+0000","last_change":"2026-03-09T21:16:10.230680+0000","last_active":"2026-03-09T21:16:41.698894+0000","last_peered":"2026-03-09T21:16:41.698894+0000","last_clean":"2026-03-09T21:16:41.698894+0000","last_became_active":"2026-03-09T21:16:10.230530+0000","last_became_peered":"2026-03-09T21:16:10.230530+0000"
,"last_unstale":"2026-03-09T21:16:41.698894+0000","last_undegraded":"2026-03-09T21:16:41.698894+0000","last_fullsized":"2026-03-09T21:16:41.698894+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:23:50.501252+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d"
,"version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.293993+0000","last_change":"2026-03-09T21:16:14.274506+0000","last_active":"2026-03-09T21:17:20.293993+0000","last_peered":"2026-03-09T21:17:20.293993+0000","last_clean":"2026-03-09T21:17:20.293993+0000","last_became_active":"2026-03-09T21:16:14.273735+0000","last_became_peered":"2026-03-09T21:16:14.273735+0000","last_unstale":"2026-03-09T21:17:20.293993+0000","last_undegraded":"2026-03-09T21:17:20.293993+0000","last_fullsized":"2026-03-09T21:17:20.293993+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:24:23.354182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698785+0000","last_change":"2026-03-09T21:16:16.272358+0000","last_active":"2026-03-09T21:16:41.698785+0000","last_peered":"2026-03-09T21:16:41.698785+0000","last_clean":"2026-03-09T21:16:41.698785+0000","last_became_active":"2026-03-09T21:16:16.272216+0000","last_became_peered":"2026-03-09T21:16:16.272216+0000","last_unstale":"2026-03-09T21:16:41.698785+0000","last_undegraded":"2026-03-09T21:16:41.698785+0000","last_fullsized":"2026-03-09T21:16:41.698785+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:59:11.843479+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701788+0000","last_change":"2026-03-09T21:16:12.237751+0000","last_active":"2026-03-09T21:16:41.701788+0000","last_peered":"2026-03-09T21:16:41.701788+0000","last_clean":"2026-03-09T21:16:41.701788+0000","last_became_active":"2026-03-09T21:16:12.237626+0000","last_became_peered":"2026-03-09T21:16:12.237626+0000","las
t_unstale":"2026-03-09T21:16:41.701788+0000","last_undegraded":"2026-03-09T21:16:41.701788+0000","last_fullsized":"2026-03-09T21:16:41.701788+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:32:16.399457+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2
.9","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698967+0000","last_change":"2026-03-09T21:16:10.230301+0000","last_active":"2026-03-09T21:16:41.698967+0000","last_peered":"2026-03-09T21:16:41.698967+0000","last_clean":"2026-03-09T21:16:41.698967+0000","last_became_active":"2026-03-09T21:16:10.230135+0000","last_became_peered":"2026-03-09T21:16:10.230135+0000","last_unstale":"2026-03-09T21:16:41.698967+0000","last_undegraded":"2026-03-09T21:16:41.698967+0000","last_fullsized":"2026-03-09T21:16:41.698967+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:45:31.433139+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.294197+0000","last_change":"2026-03-09T21:16:14.285289+0000","last_active":"2026-03-09T21:17:20.294197+0000","last_peered":"2026-03-09T21:17:20.294197+0000","last_clean":"2026-03-09T21:17:20.294197+0000","last_became_active":"2026-03-09T21:16:14.285156+0000","last_became_peered":"2026-03-09T21:16:14.285156+0000","last_unstale":"2026-03-09T21:17:20.294197+0000","last_undegraded":"2026-03-09T21:17:20.294197+0000","last_fullsized":"2026-03-09T21:17:20.294197+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.21
0067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:49:46.586928+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.322154+0000","last_change":"2026-03-09T21:16:16.280255+0000","last_active":"2026-03-09T21:16:42.322154+0000","last_peered":"2026-03-09T21:16:42.322154+0000","last_clean":"2026-03-09T21:16:42.322154+0000","last_became_active":"2026-03-09T21:16:16.280138+0000","last_became_peered":"2026-03-09T21:16:16.280138+0000","l
ast_unstale":"2026-03-09T21:16:42.322154+0000","last_undegraded":"2026-03-09T21:16:42.322154+0000","last_fullsized":"2026-03-09T21:16:42.322154+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:35:49.324454+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","v
ersion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697438+0000","last_change":"2026-03-09T21:16:12.224976+0000","last_active":"2026-03-09T21:16:41.697438+0000","last_peered":"2026-03-09T21:16:41.697438+0000","last_clean":"2026-03-09T21:16:41.697438+0000","last_became_active":"2026-03-09T21:16:12.224799+0000","last_became_peered":"2026-03-09T21:16:12.224799+0000","last_unstale":"2026-03-09T21:16:41.697438+0000","last_undegraded":"2026-03-09T21:16:41.697438+0000","last_fullsized":"2026-03-09T21:16:41.697438+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:05:31.055356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323354+0000","last_change":"2026-03-09T21:16:10.226545+0000","last_active":"2026-03-09T21:16:42.323354+0000","last_peered":"2026-03-09T21:16:42.323354+0000","last_clean":"2026-03-09T21:16:42.323354+0000","last_became_active":"2026-03-09T21:16:10.222457+0000","last_became_peered":"2026-03-09T21:16:10.222457+0000","last_unstale":"2026-03-09T21:16:42.323354+0000","last_undegraded":"2026-03-09T21:16:42.323354+0000","last_fullsized":"2026-03-09T21:16:42.323354+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:51:28.666609+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.322285+0000","last_change":"2026-03-09T21:16:14.266661+0000","last_active":"2026-03-09T21:16:42.322285+0000","last_peered":"2026-03-09T21:16:42.322285+0000","last_clean":"2026-03-09T21:16:42.322285+0000","last_became_active":"2026-03-09T21:16:14.266567+0000","last_became_peered":"2026-03-09T21:16:14.266567+0000",
"last_unstale":"2026-03-09T21:16:42.322285+0000","last_undegraded":"2026-03-09T21:16:42.322285+0000","last_fullsized":"2026-03-09T21:16:42.322285+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:17:48.388987+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701075+0000","last_change":"2026-03-09T21:16:17.159081+0000","last_active":"2026-03-09T21:16:41.701075+0000","last_peered":"2026-03-09T21:16:41.701075+0000","last_clean":"2026-03-09T21:16:41.701075+0000","last_became_active":"2026-03-09T21:16:17.158937+0000","last_became_peered":"2026-03-09T21:16:17.158937+0000","last_unstale":"2026-03-09T21:16:41.701075+0000","last_undegraded":"2026-03-09T21:16:41.701075+0000","last_fullsized":"2026-03-09T21:16:41.701075+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:51:30.900661+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"62'12","reported_seq":47,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226414+0000","last_change":"2026-03-09T21:16:12.244509+0000","last_active":"2026-03-09T21:16:42.226414+0000","last_peered":"2026-03-09T21:16:42.226414+0000","last_clean":"2026-03-09T21:16:42.226414+0000","last_became_active":"2026-03-09T21:16:12.244220+0000","last_became_peered":"2026-03-09T21:16:12.244220+0000","last_unstale":"2026-03-09T21:16:42.226414+0000","last_undegraded":"2026-03-09T21:16:42.226414+0000","last_fullsized":"2026-03-09T21:16:42.226414+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:39:26.095125+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.214082+0000","last_change":"2026-03-09T21:16:10.226264+0000","last_active":"2026-03-09T21:16:42.214082+0000","last_peered":"2026-03-09T21:16:42.214082+0000","last_clean":"2026-03-09T21:16:42.214082+0000","last_became_active":"2026-03-09T21:16:10.226138+0000","last_became_peered":"2026-03-09T21:16:10.226138+0000
","last_unstale":"2026-03-09T21:16:42.214082+0000","last_undegraded":"2026-03-09T21:16:42.214082+0000","last_fullsized":"2026-03-09T21:16:42.214082+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:27:03.455796+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1
","version":"62'1","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698691+0000","last_change":"2026-03-09T21:16:19.400305+0000","last_active":"2026-03-09T21:16:41.698691+0000","last_peered":"2026-03-09T21:16:41.698691+0000","last_clean":"2026-03-09T21:16:41.698691+0000","last_became_active":"2026-03-09T21:16:13.245150+0000","last_became_peered":"2026-03-09T21:16:13.245150+0000","last_unstale":"2026-03-09T21:16:41.698691+0000","last_undegraded":"2026-03-09T21:16:41.698691+0000","last_fullsized":"2026-03-09T21:16:41.698691+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_clean_scrub_stamp":"2026-03-09T21:16:12.191428+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:16:56.257340+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000435506,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.294277+0000","last_change":"2026-03-09T21:16:14.266070+0000","last_active":"2026-03-09T21:17:20.294277+0000","last_peered":"2026-03-09T21:17:20.294277+0000","last_clean":"2026-03-09T21:17:20.294277+0000","last_became_active":"2026-03-09T21:16:14.265871+0000","last_became_peered":"2026-03-09T21:16:14.265871+0000","last_unstale":"2026-03-09T21:17:20.294277+0000","last_undegraded":"2026-03-09T21:17:20.294277+0000","last_fullsized":"2026-03-09T21:17:20.294277+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T2
1:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:29:35.855617+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324114+0000","last_change":"2026-03-09T21:16:17.152221+0000","last_active":"2026-03-09T21:16:42.324114+0000","last_peered":"2026-03-09T21:16:42.324114+0000","last_clean":"2026-03-09T21:16:42.324114+0000","last_became_active":"2026-03-09T21:16:17.151708+0000","last_became_peered":"2026-03-09T21:16:17.15170
8+0000","last_unstale":"2026-03-09T21:16:42.324114+0000","last_undegraded":"2026-03-09T21:16:42.324114+0000","last_fullsized":"2026-03-09T21:16:42.324114+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:39:06.428621+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid
":"3.7","version":"62'13","reported_seq":56,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701368+0000","last_change":"2026-03-09T21:16:12.243013+0000","last_active":"2026-03-09T21:16:41.701368+0000","last_peered":"2026-03-09T21:16:41.701368+0000","last_clean":"2026-03-09T21:16:41.701368+0000","last_became_active":"2026-03-09T21:16:12.242734+0000","last_became_peered":"2026-03-09T21:16:12.242734+0000","last_unstale":"2026-03-09T21:16:41.701368+0000","last_undegraded":"2026-03-09T21:16:41.701368+0000","last_fullsized":"2026-03-09T21:16:41.701368+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:31:27.056422+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699042+0000","last_change":"2026-03-09T21:16:10.219855+0000","last_active":"2026-03-09T21:16:41.699042+0000","last_peered":"2026-03-09T21:16:41.699042+0000","last_clean":"2026-03-09T21:16:41.699042+0000","last_became_active":"2026-03-09T21:16:10.219721+0000","last_became_peered":"2026-03-09T21:16:10.219721+0000","last_unstale":"2026-03-09T21:16:41.699042+0000","last_undegraded":"2026-03-09T21:16:41.699042+0000","last_fullsized":"2026-03-09T21:16:41.699042+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:50:12.205240+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"64'5","reported_seq":104,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:26.253663+0000","last_change":"2026-03-09T21:16:19.401140+0000","last_active":"2026-03-09T21:17:26.253663+0000","last_peered":"2026-03-09T21:17:26.253663+0000","last_clean":"2026-03-09T21:17:26.253663+0000","last_became_active":"2026-03-09T21:16:13.262951+0000","last_became_peered":"2026-03-09T21:16:13.262951+00
00","last_unstale":"2026-03-09T21:17:26.253663+0000","last_undegraded":"2026-03-09T21:17:26.253663+0000","last_fullsized":"2026-03-09T21:17:26.253663+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_clean_scrub_stamp":"2026-03-09T21:16:12.191428+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:20:38.620893+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00093184599999999995,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"pur
ged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697860+0000","last_change":"2026-03-09T21:16:14.282500+0000","last_active":"2026-03-09T21:16:41.697860+0000","last_peered":"2026-03-09T21:16:41.697860+0000","last_clean":"2026-03-09T21:16:41.697860+0000","last_became_active":"2026-03-09T21:16:14.282399+0000","last_became_peered":"2026-03-09T21:16:14.282399+0000","last_unstale":"2026-03-09T21:16:41.697860+0000","last_undegraded":"2026-03-09T21:16:41.697860+0000","last_fullsized":"2026-03-09T21:16:41.697860+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:01.824890+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697785+0000","last_change":"2026-03-09T21:16:16.276282+0000","last_active":"2026-03-09T21:16:41.697785+0000","last_peered":"2026-03-09T21:16:41.697785+0000","last_clean":"2026-03-09T21:16:41.697785+0000","last_became_active":"2026-03-09T21:16:16.273725+0000","last_became_peered":"2026-03-09T21:16:16.273725+0000","last_unstale":"2026-03-09T21:16:41.697785+0000","last_undegraded":"2026-03-09T21:16:41.697785+0000","last_fullsized":"2026-03-09T21:16:41.697785+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:24:00.894516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"63'30","reported_seq":95,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.294022+0000","last_change":"2026-03-09T21:16:12.237652+0000","last_active":"2026-03-09T21:17:20.294022+0000","last_peered":"2026-03-09T21:17:20.294022+0000","last_clean":"2026-03-09T21:17:20.294022+0000","last_became_active":"2026-03-09T21:16:12.237544+0000","last_became_peered":"2026-03-09T21:16:12.237544+0000","las
t_unstale":"2026-03-09T21:17:20.294022+0000","last_undegraded":"2026-03-09T21:17:20.294022+0000","last_fullsized":"2026-03-09T21:17:20.294022+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:13:30.396330+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"2.5","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323420+0000","last_change":"2026-03-09T21:16:10.232495+0000","last_active":"2026-03-09T21:16:42.323420+0000","last_peered":"2026-03-09T21:16:42.323420+0000","last_clean":"2026-03-09T21:16:42.323420+0000","last_became_active":"2026-03-09T21:16:10.222432+0000","last_became_peered":"2026-03-09T21:16:10.222432+0000","last_unstale":"2026-03-09T21:16:42.323420+0000","last_undegraded":"2026-03-09T21:16:42.323420+0000","last_fullsized":"2026-03-09T21:16:42.323420+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:30:50.317955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.214588+0000","last_change":"2026-03-09T21:16:14.259163+0000","last_active":"2026-03-09T21:16:42.214588+0000","last_peered":"2026-03-09T21:16:42.214588+0000","last_clean":"2026-03-09T21:16:42.214588+0000","last_became_active":"2026-03-09T21:16:14.259025+0000","last_became_peered":"2026-03-09T21:16:14.259025+0000","last_unstale":"2026-03-09T21:16:42.214588+0000","last_undegraded":"2026-03-09T21:16:42.214588+0000","last_fullsized":"2026-03-09T21:16:42.214588+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2100
67+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:52:51.030315+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698058+0000","last_change":"2026-03-09T21:16:17.152272+0000","last_active":"2026-03-09T21:16:41.698058+0000","last_peered":"2026-03-09T21:16:41.698058+0000","last_clean":"2026-03-09T21:16:41.698058+0000","last_became_active":"2026-03-09T21:16:17.152023+0000","last_became_peered":"2026-03-09T21:16:17.152023+0000","last_
unstale":"2026-03-09T21:16:41.698058+0000","last_undegraded":"2026-03-09T21:16:41.698058+0000","last_fullsized":"2026-03-09T21:16:41.698058+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:57:09.225043+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","versi
on":"62'16","reported_seq":67,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.293892+0000","last_change":"2026-03-09T21:16:12.222701+0000","last_active":"2026-03-09T21:17:20.293892+0000","last_peered":"2026-03-09T21:17:20.293892+0000","last_clean":"2026-03-09T21:17:20.293892+0000","last_became_active":"2026-03-09T21:16:12.222402+0000","last_became_peered":"2026-03-09T21:16:12.222402+0000","last_unstale":"2026-03-09T21:17:20.293892+0000","last_undegraded":"2026-03-09T21:17:20.293892+0000","last_fullsized":"2026-03-09T21:17:20.293892+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:07:26.664155+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699094+0000","last_change":"2026-03-09T21:16:10.230597+0000","last_active":"2026-03-09T21:16:41.699094+0000","last_peered":"2026-03-09T21:16:41.699094+0000","last_clean":"2026-03-09T21:16:41.699094+0000","last_became_active":"2026-03-09T21:16:10.230105+0000","last_became_peered":"2026-03-09T21:16:10.230105+0000","last_unstale":"2026-03-09T21:16:41.699094+0000","last_undegraded":"2026-03-09T21:16:41.699094+0000","last_fullsized":"2026-03-09T21:16:41.699094+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:54:43.876223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"64'2","reported_seq":36,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699145+0000","last_change":"2026-03-09T21:16:19.410663+0000","last_active":"2026-03-09T21:16:41.699145+0000","last_peered":"2026-03-09T21:16:41.699145+0000","last_clean":"2026-03-09T21:16:41.699145+0000","last_became_active":"2026-03-09T21:16:13.247614+0000","last_became_peered":"2026-03-09T21:16:13.247614+0000"
,"last_unstale":"2026-03-09T21:16:41.699145+0000","last_undegraded":"2026-03-09T21:16:41.699145+0000","last_fullsized":"2026-03-09T21:16:41.699145+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_clean_scrub_stamp":"2026-03-09T21:16:12.191428+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:32:53.049064+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.0010782089999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_sna
ps":[]},{"pgid":"5.3","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.294268+0000","last_change":"2026-03-09T21:16:14.260466+0000","last_active":"2026-03-09T21:17:20.294268+0000","last_peered":"2026-03-09T21:17:20.294268+0000","last_clean":"2026-03-09T21:17:20.294268+0000","last_became_active":"2026-03-09T21:16:14.260359+0000","last_became_peered":"2026-03-09T21:16:14.260359+0000","last_unstale":"2026-03-09T21:17:20.294268+0000","last_undegraded":"2026-03-09T21:17:20.294268+0000","last_fullsized":"2026-03-09T21:17:20.294268+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:02:23.995526+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225628+0000","last_change":"2026-03-09T21:16:16.275442+0000","last_active":"2026-03-09T21:16:42.225628+0000","last_peered":"2026-03-09T21:16:42.225628+0000","last_clean":"2026-03-09T21:16:42.225628+0000","last_became_active":"2026-03-09T21:16:16.270305+0000","last_became_peered":"2026-03-09T21:16:16.270305+0000","last_unstale":"2026-03-09T21:16:42.225628+0000","last_undegraded":"2026-03-09T21:16:42.225628+0000","last_fullsized":"2026-03-09T21:16:42.225628+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:27:03.391196+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"62'19","reported_seq":65,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697593+0000","last_change":"2026-03-09T21:16:12.232513+0000","last_active":"2026-03-09T21:16:41.697593+0000","last_peered":"2026-03-09T21:16:41.697593+0000","last_clean":"2026-03-09T21:16:41.697593+0000","last_became_active":"2026-03-09T21:16:12.232399+0000","last_became_peered":"2026-03-09T21:16:12.232399+0000","las
t_unstale":"2026-03-09T21:16:41.697593+0000","last_undegraded":"2026-03-09T21:16:41.697593+0000","last_fullsized":"2026-03-09T21:16:41.697593+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:08:04.524835+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.2","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320292+0000","last_change":"2026-03-09T21:16:10.217238+0000","last_active":"2026-03-09T21:16:42.320292+0000","last_peered":"2026-03-09T21:16:42.320292+0000","last_clean":"2026-03-09T21:16:42.320292+0000","last_became_active":"2026-03-09T21:16:10.216498+0000","last_became_peered":"2026-03-09T21:16:10.216498+0000","last_unstale":"2026-03-09T21:16:42.320292+0000","last_undegraded":"2026-03-09T21:16:42.320292+0000","last_fullsized":"2026-03-09T21:16:42.320292+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:49:56.184937+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225966+0000","last_change":"2026-03-09T21:16:14.267067+0000","last_active":"2026-03-09T21:16:42.225966+0000","last_peered":"2026-03-09T21:16:42.225966+0000","last_clean":"2026-03-09T21:16:42.225966+0000","last_became_active":"2026-03-09T21:16:14.266634+0000","last_became_peered":"2026-03-09T21:16:14.266634+0000","last_unstale":"2026-03-09T21:16:42.225966+0000","last_undegraded":"2026-03-09T21:16:42.225966+0000","last_fullsized":"2026-03-09T21:16:42.225966+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2100
67+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:21:46.006225+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701160+0000","last_change":"2026-03-09T21:16:16.273616+0000","last_active":"2026-03-09T21:16:41.701160+0000","last_peered":"2026-03-09T21:16:41.701160+0000","last_clean":"2026-03-09T21:16:41.701160+0000","last_became_active":"2026-03-09T21:16:16.273281+0000","last_became_peered":"2026-03-09T21:16:16.273281+0000","last_
unstale":"2026-03-09T21:16:41.701160+0000","last_undegraded":"2026-03-09T21:16:41.701160+0000","last_fullsized":"2026-03-09T21:16:41.701160+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:10:08.533145+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","versi
on":"62'18","reported_seq":61,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698691+0000","last_change":"2026-03-09T21:16:12.248035+0000","last_active":"2026-03-09T21:16:41.698691+0000","last_peered":"2026-03-09T21:16:41.698691+0000","last_clean":"2026-03-09T21:16:41.698691+0000","last_became_active":"2026-03-09T21:16:12.247910+0000","last_became_peered":"2026-03-09T21:16:12.247910+0000","last_unstale":"2026-03-09T21:16:41.698691+0000","last_undegraded":"2026-03-09T21:16:41.698691+0000","last_fullsized":"2026-03-09T21:16:41.698691+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:36:25.689805+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320448+0000","last_change":"2026-03-09T21:16:10.207522+0000","last_active":"2026-03-09T21:16:42.320448+0000","last_peered":"2026-03-09T21:16:42.320448+0000","last_clean":"2026-03-09T21:16:42.320448+0000","last_became_active":"2026-03-09T21:16:10.207168+0000","last_became_peered":"2026-03-09T21:16:10.207168+0000","last_unstale":"2026-03-09T21:16:42.320448+0000","last_undegraded":"2026-03-09T21:16:42.320448+0000","last_fullsized":"2026-03-09T21:16:42.320448+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:08:27.281888+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320833+0000","last_change":"2026-03-09T21:16:14.274889+0000","last_active":"2026-03-09T21:16:42.320833+0000","last_peered":"2026-03-09T21:16:42.320833+0000","last_clean":"2026-03-09T21:16:42.320833+0000","last_became_active":"2026-03-09T21:16:14.274365+0000","last_became_peered":"2026-03-09T21:16:14.274365+0000",
"last_unstale":"2026-03-09T21:16:42.320833+0000","last_undegraded":"2026-03-09T21:16:42.320833+0000","last_fullsized":"2026-03-09T21:16:42.320833+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:22:05.044533+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325440+0000","last_change":"2026-03-09T21:16:17.152082+0000","last_active":"2026-03-09T21:16:42.325440+0000","last_peered":"2026-03-09T21:16:42.325440+0000","last_clean":"2026-03-09T21:16:42.325440+0000","last_became_active":"2026-03-09T21:16:17.151462+0000","last_became_peered":"2026-03-09T21:16:17.151462+0000","last_unstale":"2026-03-09T21:16:42.325440+0000","last_undegraded":"2026-03-09T21:16:42.325440+0000","last_fullsized":"2026-03-09T21:16:42.325440+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:06:05.628662+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":50,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226364+0000","last_change":"2026-03-09T21:16:12.254210+0000","last_active":"2026-03-09T21:16:42.226364+0000","last_peered":"2026-03-09T21:16:42.226364+0000","last_clean":"2026-03-09T21:16:42.226364+0000","last_became_active":"2026-03-09T21:16:12.254069+0000","last_became_peered":"2026-03-09T21:16:12.254069+0000","last_unstale":"2026-03-09T21:16:42.226364+0000","last_undegraded":"2026-03-09T21:16:42.226364+0000","last_fullsized":"2026-03-09T21:16:42.226364+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:11:27.225400+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323299+0000","last_change":"2026-03-09T21:16:10.226636+0000","last_active":"2026-03-09T21:16:42.323299+0000","last_peered":"2026-03-09T21:16:42.323299+0000","last_clean":"2026-03-09T21:16:42.323299+0000","last_became_active":"2026-03-09T21:16:10.222626+0000","last_became_peered":"2026-03-09T21:16:10.222626+0000
","last_unstale":"2026-03-09T21:16:42.323299+0000","last_undegraded":"2026-03-09T21:16:42.323299+0000","last_fullsized":"2026-03-09T21:16:42.323299+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:24:56.416965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7
","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321982+0000","last_change":"2026-03-09T21:16:14.278782+0000","last_active":"2026-03-09T21:16:42.321982+0000","last_peered":"2026-03-09T21:16:42.321982+0000","last_clean":"2026-03-09T21:16:42.321982+0000","last_became_active":"2026-03-09T21:16:14.278576+0000","last_became_peered":"2026-03-09T21:16:14.278576+0000","last_unstale":"2026-03-09T21:16:42.321982+0000","last_undegraded":"2026-03-09T21:16:42.321982+0000","last_fullsized":"2026-03-09T21:16:42.321982+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:13:29.774246+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699453+0000","last_change":"2026-03-09T21:16:16.267326+0000","last_active":"2026-03-09T21:16:41.699453+0000","last_peered":"2026-03-09T21:16:41.699453+0000","last_clean":"2026-03-09T21:16:41.699453+0000","last_became_active":"2026-03-09T21:16:16.267207+0000","last_became_peered":"2026-03-09T21:16:16.267207+0000","last_unstale":"2026-03-09T21:16:41.699453+0000","last_undegraded":"2026-03-09T21:16:41.699453+0000","last_fullsized":"2026-03-09T21:16:41.699453+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:25:14.739165+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702839+0000","last_change":"2026-03-09T21:16:12.240616+0000","last_active":"2026-03-09T21:16:41.702839+0000","last_peered":"2026-03-09T21:16:41.702839+0000","last_clean":"2026-03-09T21:16:41.702839+0000","last_became_active":"2026-03-09T21:16:12.240395+0000","last_became_peered":"2026-03-09T21:16:12.240395+0000","las
t_unstale":"2026-03-09T21:16:41.702839+0000","last_undegraded":"2026-03-09T21:16:41.702839+0000","last_fullsized":"2026-03-09T21:16:41.702839+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:54:16.271756+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320912+0000","last_change":"2026-03-09T21:16:10.227723+0000","last_active":"2026-03-09T21:16:42.320912+0000","last_peered":"2026-03-09T21:16:42.320912+0000","last_clean":"2026-03-09T21:16:42.320912+0000","last_became_active":"2026-03-09T21:16:10.227546+0000","last_became_peered":"2026-03-09T21:16:10.227546+0000","last_unstale":"2026-03-09T21:16:42.320912+0000","last_undegraded":"2026-03-09T21:16:42.320912+0000","last_fullsized":"2026-03-09T21:16:42.320912+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:55:31.731415+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"66'39","reported_seq":68,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:43.784267+0000","last_change":"2026-03-09T21:15:49.169775+0000","last_active":"2026-03-09T21:16:43.784267+0000","last_peered":"2026-03-09T21:16:43.784267+0000","last_clean":"2026-03-09T21:16:43.784267+0000","last_became_active":"2026-03-09T21:15:49.162649+0000","last_became_peered":"2026-03-09T21:15:49.162649+0000","last_unstale":"2026-03-09T21:16:43.784267+0000","last_undegraded":"2026-03-09T21:16:43.784267+0000","last_fullsized":"2026-03-09T21:16:43.784267+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:12:51.208803+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:12:51.20
8803+0000","last_clean_scrub_stamp":"2026-03-09T21:12:51.208803+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:57:59.142300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324850+0000","last_change":"2026-03-09T21:16:14.287882+0000","last_active":"2026-03-09T21:16:42.324850+0000","last_peered":"2026-03-09T21:16:42.324850+0000","last_clean":"2026-03-09T21:16:42.324850+0000","last_became_active":"2026-03-09T21:16:14.287549+0000","last_became_peered":"2026-03-09T21:16:1
4.287549+0000","last_unstale":"2026-03-09T21:16:42.324850+0000","last_undegraded":"2026-03-09T21:16:42.324850+0000","last_fullsized":"2026-03-09T21:16:42.324850+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:15:17.664068+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}
,{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320841+0000","last_change":"2026-03-09T21:16:16.273012+0000","last_active":"2026-03-09T21:16:42.320841+0000","last_peered":"2026-03-09T21:16:42.320841+0000","last_clean":"2026-03-09T21:16:42.320841+0000","last_became_active":"2026-03-09T21:16:16.272927+0000","last_became_peered":"2026-03-09T21:16:16.272927+0000","last_unstale":"2026-03-09T21:16:42.320841+0000","last_undegraded":"2026-03-09T21:16:42.320841+0000","last_fullsized":"2026-03-09T21:16:42.320841+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:32:44.469965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"62'17","reported_seq":57,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323882+0000","last_change":"2026-03-09T21:16:12.232337+0000","last_active":"2026-03-09T21:16:42.323882+0000","last_peered":"2026-03-09T21:16:42.323882+0000","last_clean":"2026-03-09T21:16:42.323882+0000","last_became_active":"2026-03-09T21:16:12.232091+0000","last_became_peered":"2026-03-09T21:16:12.232091+0000","last_unstale":"2026-03-09T21:16:42.323882+0000","last_undegraded":"2026-03-09T21:16:42.323882+0000","last_fullsized":"2026-03-09T21:16:42.323882+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:29:03.818724+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320946+0000","last_change":"2026-03-09T21:16:10.207431+0000","last_active":"2026-03-09T21:16:42.320946+0000","last_peered":"2026-03-09T21:16:42.320946+0000","last_clean":"2026-03-09T21:16:42.320946+0000","last_became_active":"2026-03-09T21:16:10.207019+0000","last_became_peered":"2026-03-09T21:16:10.207019+00
00","last_unstale":"2026-03-09T21:16:42.320946+0000","last_undegraded":"2026-03-09T21:16:42.320946+0000","last_fullsized":"2026-03-09T21:16:42.320946+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:03:30.578746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5
.b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320975+0000","last_change":"2026-03-09T21:16:14.261145+0000","last_active":"2026-03-09T21:16:42.320975+0000","last_peered":"2026-03-09T21:16:42.320975+0000","last_clean":"2026-03-09T21:16:42.320975+0000","last_became_active":"2026-03-09T21:16:14.261015+0000","last_became_peered":"2026-03-09T21:16:14.261015+0000","last_unstale":"2026-03-09T21:16:42.320975+0000","last_undegraded":"2026-03-09T21:16:42.320975+0000","last_fullsized":"2026-03-09T21:16:42.320975+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:08:04.549744+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323920+0000","last_change":"2026-03-09T21:16:16.267032+0000","last_active":"2026-03-09T21:16:42.323920+0000","last_peered":"2026-03-09T21:16:42.323920+0000","last_clean":"2026-03-09T21:16:42.323920+0000","last_became_active":"2026-03-09T21:16:16.266912+0000","last_became_peered":"2026-03-09T21:16:16.266912+0000","last_unstale":"2026-03-09T21:16:42.323920+0000","last_undegraded":"2026-03-09T21:16:42.323920+0000","last_fullsized":"2026-03-09T21:16:42.323920+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:48:18.921950+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321738+0000","last_change":"2026-03-09T21:16:12.238570+0000","last_active":"2026-03-09T21:16:42.321738+0000","last_peered":"2026-03-09T21:16:42.321738+0000","last_clean":"2026-03-09T21:16:42.321738+0000","last_became_active":"2026-03-09T21:16:12.238332+0000","last_became_peered":"2026-03-09T21:16:12.238332+0000","las
t_unstale":"2026-03-09T21:16:42.321738+0000","last_undegraded":"2026-03-09T21:16:42.321738+0000","last_fullsized":"2026-03-09T21:16:42.321738+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:01:27.938454+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699548+0000","last_change":"2026-03-09T21:16:10.216661+0000","last_active":"2026-03-09T21:16:41.699548+0000","last_peered":"2026-03-09T21:16:41.699548+0000","last_clean":"2026-03-09T21:16:41.699548+0000","last_became_active":"2026-03-09T21:16:10.216548+0000","last_became_peered":"2026-03-09T21:16:10.216548+0000","last_unstale":"2026-03-09T21:16:41.699548+0000","last_undegraded":"2026-03-09T21:16:41.699548+0000","last_fullsized":"2026-03-09T21:16:41.699548+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:32:06.723542+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321230+0000","last_change":"2026-03-09T21:16:14.264873+0000","last_active":"2026-03-09T21:16:42.321230+0000","last_peered":"2026-03-09T21:16:42.321230+0000","last_clean":"2026-03-09T21:16:42.321230+0000","last_became_active":"2026-03-09T21:16:14.264645+0000","last_became_peered":"2026-03-09T21:16:14.264645+0000","last_unstale":"2026-03-09T21:16:42.321230+0000","last_undegraded":"2026-03-09T21:16:42.321230+0000","last_fullsized":"2026-03-09T21:16:42.321230+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2100
67+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:26:37.126300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225333+0000","last_change":"2026-03-09T21:16:16.275373+0000","last_active":"2026-03-09T21:16:42.225333+0000","last_peered":"2026-03-09T21:16:42.225333+0000","last_clean":"2026-03-09T21:16:42.225333+0000","last_became_active":"2026-03-09T21:16:16.270182+0000","last_became_peered":"2026-03-09T21:16:16.270182+0000","last_
unstale":"2026-03-09T21:16:42.225333+0000","last_undegraded":"2026-03-09T21:16:42.225333+0000","last_fullsized":"2026-03-09T21:16:42.225333+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:12:21.691688+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","versi
on":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324449+0000","last_change":"2026-03-09T21:16:12.245451+0000","last_active":"2026-03-09T21:16:42.324449+0000","last_peered":"2026-03-09T21:16:42.324449+0000","last_clean":"2026-03-09T21:16:42.324449+0000","last_became_active":"2026-03-09T21:16:12.245342+0000","last_became_peered":"2026-03-09T21:16:12.245342+0000","last_unstale":"2026-03-09T21:16:42.324449+0000","last_undegraded":"2026-03-09T21:16:42.324449+0000","last_fullsized":"2026-03-09T21:16:42.324449+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:50:26.299246+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320494+0000","last_change":"2026-03-09T21:16:10.223307+0000","last_active":"2026-03-09T21:16:42.320494+0000","last_peered":"2026-03-09T21:16:42.320494+0000","last_clean":"2026-03-09T21:16:42.320494+0000","last_became_active":"2026-03-09T21:16:10.223100+0000","last_became_peered":"2026-03-09T21:16:10.223100+0000","last_unstale":"2026-03-09T21:16:42.320494+0000","last_undegraded":"2026-03-09T21:16:42.320494+0000","last_fullsized":"2026-03-09T21:16:42.320494+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:31:09.474052+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.294280+0000","last_change":"2026-03-09T21:16:14.287852+0000","last_active":"2026-03-09T21:17:20.294280+0000","last_peered":"2026-03-09T21:17:20.294280+0000","last_clean":"2026-03-09T21:17:20.294280+0000","last_became_active":"2026-03-09T21:16:14.287754+0000","last_became_peered":"2026-03-09T21:16:14.287754+0000
","last_unstale":"2026-03-09T21:17:20.294280+0000","last_undegraded":"2026-03-09T21:17:20.294280+0000","last_fullsized":"2026-03-09T21:17:20.294280+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:23:06.291274+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6
.a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320647+0000","last_change":"2026-03-09T21:16:17.158049+0000","last_active":"2026-03-09T21:16:42.320647+0000","last_peered":"2026-03-09T21:16:42.320647+0000","last_clean":"2026-03-09T21:16:42.320647+0000","last_became_active":"2026-03-09T21:16:17.157829+0000","last_became_peered":"2026-03-09T21:16:17.157829+0000","last_unstale":"2026-03-09T21:16:42.320647+0000","last_undegraded":"2026-03-09T21:16:42.320647+0000","last_fullsized":"2026-03-09T21:16:42.320647+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:48:32.647715+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325023+0000","last_change":"2026-03-09T21:16:12.235713+0000","last_active":"2026-03-09T21:16:42.325023+0000","last_peered":"2026-03-09T21:16:42.325023+0000","last_clean":"2026-03-09T21:16:42.325023+0000","last_became_active":"2026-03-09T21:16:12.235577+0000","last_became_peered":"2026-03-09T21:16:12.235577+0000","last_unstale":"2026-03-09T21:16:42.325023+0000","last_undegraded":"2026-03-09T21:16:42.325023+0000","last_fullsized":"2026-03-09T21:16:42.325023+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:43:55.885246+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"55'2","reported_seq":49,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698367+0000","last_change":"2026-03-09T21:16:10.223447+0000","last_active":"2026-03-09T21:16:41.698367+0000","last_peered":"2026-03-09T21:16:41.698367+0000","last_clean":"2026-03-09T21:16:41.698367+0000","last_became_active":"2026-03-09T21:16:10.223290+0000","last_became_peered":"2026-03-09T21:16:10.223290+0
000","last_unstale":"2026-03-09T21:16:41.698367+0000","last_undegraded":"2026-03-09T21:16:41.698367+0000","last_fullsized":"2026-03-09T21:16:41.698367+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:13:40.060786+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid
":"5.8","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321153+0000","last_change":"2026-03-09T21:16:14.262654+0000","last_active":"2026-03-09T21:16:42.321153+0000","last_peered":"2026-03-09T21:16:42.321153+0000","last_clean":"2026-03-09T21:16:42.321153+0000","last_became_active":"2026-03-09T21:16:14.262264+0000","last_became_peered":"2026-03-09T21:16:14.262264+0000","last_unstale":"2026-03-09T21:16:42.321153+0000","last_undegraded":"2026-03-09T21:16:42.321153+0000","last_fullsized":"2026-03-09T21:16:42.321153+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:05:49.554558+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701119+0000","last_change":"2026-03-09T21:16:16.266710+0000","last_active":"2026-03-09T21:16:41.701119+0000","last_peered":"2026-03-09T21:16:41.701119+0000","last_clean":"2026-03-09T21:16:41.701119+0000","last_became_active":"2026-03-09T21:16:16.265277+0000","last_became_peered":"2026-03-09T21:16:16.265277+0000","last_unstale":"2026-03-09T21:16:41.701119+0000","last_undegraded":"2026-03-09T21:16:41.701119+0000","last_fullsized":"2026-03-09T21:16:41.701119+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:49:53.118319+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323799+0000","last_change":"2026-03-09T21:16:12.232405+0000","last_active":"2026-03-09T21:16:42.323799+0000","last_peered":"2026-03-09T21:16:42.323799+0000","last_clean":"2026-03-09T21:16:42.323799+0000","last_became_active":"2026-03-09T21:16:12.232253+0000","last_became_peered":"2026-03-09T21:16:12.232253+0000","la
st_unstale":"2026-03-09T21:16:42.323799+0000","last_undegraded":"2026-03-09T21:16:42.323799+0000","last_fullsized":"2026-03-09T21:16:42.323799+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:07:26.360266+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
2.10","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320343+0000","last_change":"2026-03-09T21:16:10.207596+0000","last_active":"2026-03-09T21:16:42.320343+0000","last_peered":"2026-03-09T21:16:42.320343+0000","last_clean":"2026-03-09T21:16:42.320343+0000","last_became_active":"2026-03-09T21:16:10.207326+0000","last_became_peered":"2026-03-09T21:16:10.207326+0000","last_unstale":"2026-03-09T21:16:42.320343+0000","last_undegraded":"2026-03-09T21:16:42.320343+0000","last_fullsized":"2026-03-09T21:16:42.320343+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:32:58.852705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707640+0000","last_change":"2026-03-09T21:16:14.275288+0000","last_active":"2026-03-09T21:16:41.707640+0000","last_peered":"2026-03-09T21:16:41.707640+0000","last_clean":"2026-03-09T21:16:41.707640+0000","last_became_active":"2026-03-09T21:16:14.272967+0000","last_became_peered":"2026-03-09T21:16:14.272967+0000","last_unstale":"2026-03-09T21:16:41.707640+0000","last_undegraded":"2026-03-09T21:16:41.707640+0000","last_fullsized":"2026-03-09T21:16:41.707640+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210
067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:25:29.357834+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320890+0000","last_change":"2026-03-09T21:16:16.274693+0000","last_active":"2026-03-09T21:16:42.320890+0000","last_peered":"2026-03-09T21:16:42.320890+0000","last_clean":"2026-03-09T21:16:42.320890+0000","last_became_active":"2026-03-09T21:16:16.274491+0000","last_became_peered":"2026-03-09T21:16:16.274491+0000","las
t_unstale":"2026-03-09T21:16:42.320890+0000","last_undegraded":"2026-03-09T21:16:42.320890+0000","last_fullsized":"2026-03-09T21:16:42.320890+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:20:32.759919+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","ve
rsion":"62'4","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213977+0000","last_change":"2026-03-09T21:16:12.248075+0000","last_active":"2026-03-09T21:16:42.213977+0000","last_peered":"2026-03-09T21:16:42.213977+0000","last_clean":"2026-03-09T21:16:42.213977+0000","last_became_active":"2026-03-09T21:16:12.241397+0000","last_became_peered":"2026-03-09T21:16:12.241397+0000","last_unstale":"2026-03-09T21:16:42.213977+0000","last_undegraded":"2026-03-09T21:16:42.213977+0000","last_fullsized":"2026-03-09T21:16:42.213977+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:35:28.009360+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213966+0000","last_change":"2026-03-09T21:16:10.218156+0000","last_active":"2026-03-09T21:16:42.213966+0000","last_peered":"2026-03-09T21:16:42.213966+0000","last_clean":"2026-03-09T21:16:42.213966+0000","last_became_active":"2026-03-09T21:16:10.217929+0000","last_became_peered":"2026-03-09T21:16:10.217929+0000","last_unstale":"2026-03-09T21:16:42.213966+0000","last_undegraded":"2026-03-09T21:16:42.213966+0000","last_fullsized":"2026-03-09T21:16:42.213966+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148
205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:30:32.611575+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.322042+0000","last_change":"2026-03-09T21:16:14.283836+0000","last_active":"2026-03-09T21:16:42.322042+0000","last_peered":"2026-03-09T21:16:42.322042+0000","last_clean":"2026-03-09T21:16:42.322042+0000","last_became_active":"2026-03-09T21:16:14.268301+0000","last_became_peered":"2026-03-09T21:16:14.268301+0000","las
t_unstale":"2026-03-09T21:16:42.322042+0000","last_undegraded":"2026-03-09T21:16:42.322042+0000","last_fullsized":"2026-03-09T21:16:42.322042+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:05:15.370994+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","ve
rsion":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325512+0000","last_change":"2026-03-09T21:16:17.154283+0000","last_active":"2026-03-09T21:16:42.325512+0000","last_peered":"2026-03-09T21:16:42.325512+0000","last_clean":"2026-03-09T21:16:42.325512+0000","last_became_active":"2026-03-09T21:16:17.154172+0000","last_became_peered":"2026-03-09T21:16:17.154172+0000","last_unstale":"2026-03-09T21:16:42.325512+0000","last_undegraded":"2026-03-09T21:16:42.325512+0000","last_fullsized":"2026-03-09T21:16:42.325512+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:28:07.199857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324230+0000","last_change":"2026-03-09T21:16:12.224697+0000","last_active":"2026-03-09T21:16:42.324230+0000","last_peered":"2026-03-09T21:16:42.324230+0000","last_clean":"2026-03-09T21:16:42.324230+0000","last_became_active":"2026-03-09T21:16:12.224433+0000","last_became_peered":"2026-03-09T21:16:12.224433+0000","last_unstale":"2026-03-09T21:16:42.324230+0000","last_undegraded":"2026-03-09T21:16:42.324230+0000","last_fullsized":"2026-03-09T21:16:42.324230+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:06:15.953602+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320427+0000","last_change":"2026-03-09T21:16:10.227999+0000","last_active":"2026-03-09T21:16:42.320427+0000","last_peered":"2026-03-09T21:16:42.320427+0000","last_clean":"2026-03-09T21:16:42.320427+0000","last_became_active":"2026-03-09T21:16:10.227846+0000","last_became_peered":"2026-03-09T21:16:10.227846
+0000","last_unstale":"2026-03-09T21:16:42.320427+0000","last_undegraded":"2026-03-09T21:16:42.320427+0000","last_fullsized":"2026-03-09T21:16:42.320427+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:03:25.824965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"p
gid":"5.15","version":"63'11","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.293741+0000","last_change":"2026-03-09T21:16:14.278691+0000","last_active":"2026-03-09T21:17:20.293741+0000","last_peered":"2026-03-09T21:17:20.293741+0000","last_clean":"2026-03-09T21:17:20.293741+0000","last_became_active":"2026-03-09T21:16:14.278362+0000","last_became_peered":"2026-03-09T21:16:14.278362+0000","last_unstale":"2026-03-09T21:17:20.293741+0000","last_undegraded":"2026-03-09T21:17:20.293741+0000","last_fullsized":"2026-03-09T21:17:20.293741+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:12:14.733604+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225497+0000","last_change":"2026-03-09T21:16:16.276544+0000","last_active":"2026-03-09T21:16:42.225497+0000","last_peered":"2026-03-09T21:16:42.225497+0000","last_clean":"2026-03-09T21:16:42.225497+0000","last_became_active":"2026-03-09T21:16:16.276443+0000","last_became_peered":"2026-03-09T21:16:16.276443+0000","last_unstale":"2026-03-09T21:16:42.225497+0000","last_undegraded":"2026-03-09T21:16:42.225497+0000","last_fullsized":"2026-03-09T21:16:42.225497+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234
556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:21:56.957892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226036+0000","last_change":"2026-03-09T21:16:12.253451+0000","last_active":"2026-03-09T21:16:42.226036+0000","last_peered":"2026-03-09T21:16:42.226036+0000","last_clean":"2026-03-09T21:16:42.226036+0000","last_became_active":"2026-03-09T21:16:12.252676+0000","last_became_peered":"2026-03-09T21:16:12.252676+0000","la
st_unstale":"2026-03-09T21:16:42.226036+0000","last_undegraded":"2026-03-09T21:16:42.226036+0000","last_fullsized":"2026-03-09T21:16:42.226036+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:10:28.599615+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.
13","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226787+0000","last_change":"2026-03-09T21:16:10.216983+0000","last_active":"2026-03-09T21:16:42.226787+0000","last_peered":"2026-03-09T21:16:42.226787+0000","last_clean":"2026-03-09T21:16:42.226787+0000","last_became_active":"2026-03-09T21:16:10.216656+0000","last_became_peered":"2026-03-09T21:16:10.216656+0000","last_unstale":"2026-03-09T21:16:42.226787+0000","last_undegraded":"2026-03-09T21:16:42.226787+0000","last_fullsized":"2026-03-09T21:16:42.226787+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:34:53.806330+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.294352+0000","last_change":"2026-03-09T21:16:14.277022+0000","last_active":"2026-03-09T21:17:20.294352+0000","last_peered":"2026-03-09T21:17:20.294352+0000","last_clean":"2026-03-09T21:17:20.294352+0000","last_became_active":"2026-03-09T21:16:14.273186+0000","last_became_peered":"2026-03-09T21:16:14.273186+0000","last_unstale":"2026-03-09T21:17:20.294352+0000","last_undegraded":"2026-03-09T21:17:20.294352+0000","last_fullsized":"2026-03-09T21:17:20.294352+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2
10067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:52:00.481587+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697711+0000","last_change":"2026-03-09T21:16:16.276694+0000","last_active":"2026-03-09T21:16:41.697711+0000","last_peered":"2026-03-09T21:16:41.697711+0000","last_clean":"2026-03-09T21:16:41.697711+0000","last_became_active":"2026-03-09T21:16:16.275869+0000","last_became_peered":"2026-03-09T21:16:16.275869+0000",
"last_unstale":"2026-03-09T21:16:41.697711+0000","last_undegraded":"2026-03-09T21:16:41.697711+0000","last_fullsized":"2026-03-09T21:16:41.697711+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:23:27.691544+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15"
,"version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323787+0000","last_change":"2026-03-09T21:16:12.225161+0000","last_active":"2026-03-09T21:16:42.323787+0000","last_peered":"2026-03-09T21:16:42.323787+0000","last_clean":"2026-03-09T21:16:42.323787+0000","last_became_active":"2026-03-09T21:16:12.224534+0000","last_became_peered":"2026-03-09T21:16:12.224534+0000","last_unstale":"2026-03-09T21:16:42.323787+0000","last_undegraded":"2026-03-09T21:16:42.323787+0000","last_fullsized":"2026-03-09T21:16:42.323787+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:04:38.541569+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213565+0000","last_change":"2026-03-09T21:16:10.212267+0000","last_active":"2026-03-09T21:16:42.213565+0000","last_peered":"2026-03-09T21:16:42.213565+0000","last_clean":"2026-03-09T21:16:42.213565+0000","last_became_active":"2026-03-09T21:16:10.211916+0000","last_became_peered":"2026-03-09T21:16:10.211916+0000","last_unstale":"2026-03-09T21:16:42.213565+0000","last_undegraded":"2026-03-09T21:16:42.213565+0000","last_fullsized":"2026-03-09T21:16:42.213565+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:05:54.459284+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701642+0000","last_change":"2026-03-09T21:16:14.263603+0000","last_active":"2026-03-09T21:16:41.701642+0000","last_peered":"2026-03-09T21:16:41.701642+0000","last_clean":"2026-03-09T21:16:41.701642+0000","last_became_active":"2026-03-09T21:16:14.263413+0000","last_became_peered":"2026-03-09T21:16:14.263413+0000
","last_unstale":"2026-03-09T21:16:41.701642+0000","last_undegraded":"2026-03-09T21:16:41.701642+0000","last_fullsized":"2026-03-09T21:16:41.701642+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:31:16.356656+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1
0","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225571+0000","last_change":"2026-03-09T21:16:16.279437+0000","last_active":"2026-03-09T21:16:42.225571+0000","last_peered":"2026-03-09T21:16:42.225571+0000","last_clean":"2026-03-09T21:16:42.225571+0000","last_became_active":"2026-03-09T21:16:16.279348+0000","last_became_peered":"2026-03-09T21:16:16.279348+0000","last_unstale":"2026-03-09T21:16:42.225571+0000","last_undegraded":"2026-03-09T21:16:42.225571+0000","last_fullsized":"2026-03-09T21:16:42.225571+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:34:06.442063+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697249+0000","last_change":"2026-03-09T21:16:12.239617+0000","last_active":"2026-03-09T21:16:41.697249+0000","last_peered":"2026-03-09T21:16:41.697249+0000","last_clean":"2026-03-09T21:16:41.697249+0000","last_became_active":"2026-03-09T21:16:12.239416+0000","last_became_peered":"2026-03-09T21:16:12.239416+0000","last_unstale":"2026-03-09T21:16:41.697249+0000","last_undegraded":"2026-03-09T21:16:41.697249+0000","last_fullsized":"2026-03-09T21:16:41.697249+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:56:20.827013+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699313+0000","last_change":"2026-03-09T21:16:10.210770+0000","last_active":"2026-03-09T21:16:41.699313+0000","last_peered":"2026-03-09T21:16:41.699313+0000","last_clean":"2026-03-09T21:16:41.699313+0000","last_became_active":"2026-03-09T21:16:10.210687+0000","last_became_peered":"2026-03-09T21:16:10.210687+00
00","last_unstale":"2026-03-09T21:16:41.699313+0000","last_undegraded":"2026-03-09T21:16:41.699313+0000","last_fullsized":"2026-03-09T21:16:41.699313+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:36:49.951697+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5
.12","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699270+0000","last_change":"2026-03-09T21:16:14.269683+0000","last_active":"2026-03-09T21:16:41.699270+0000","last_peered":"2026-03-09T21:16:41.699270+0000","last_clean":"2026-03-09T21:16:41.699270+0000","last_became_active":"2026-03-09T21:16:14.269254+0000","last_became_peered":"2026-03-09T21:16:14.269254+0000","last_unstale":"2026-03-09T21:16:41.699270+0000","last_undegraded":"2026-03-09T21:16:41.699270+0000","last_fullsized":"2026-03-09T21:16:41.699270+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:39:25.534484+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"62'1","reported_seq":22,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701205+0000","last_change":"2026-03-09T21:16:16.278021+0000","last_active":"2026-03-09T21:16:41.701205+0000","last_peered":"2026-03-09T21:16:41.701205+0000","last_clean":"2026-03-09T21:16:41.701205+0000","last_became_active":"2026-03-09T21:16:16.277930+0000","last_became_peered":"2026-03-09T21:16:16.277930+0000","last_unstale":"2026-03-09T21:16:41.701205+0000","last_undegraded":"2026-03-09T21:16:41.701205+0000","last_fullsized":"2026-03-09T21:16:41.701205+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.23
4556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:26:15.622259+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"62'6","reported_seq":38,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226466+0000","last_change":"2026-03-09T21:16:12.253857+0000","last_active":"2026-03-09T21:16:42.226466+0000","last_peered":"2026-03-09T21:16:42.226466+0000","last_clean":"2026-03-09T21:16:42.226466+0000","last_became_active":"2026-03-09T21:16:12.253608+0000","last_became_peered":"2026-03-09T21:16:12.253608+0000","
last_unstale":"2026-03-09T21:16:42.226466+0000","last_undegraded":"2026-03-09T21:16:42.226466+0000","last_fullsized":"2026-03-09T21:16:42.226466+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:55:50.922798+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16",
"version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321426+0000","last_change":"2026-03-09T21:16:10.211010+0000","last_active":"2026-03-09T21:16:42.321426+0000","last_peered":"2026-03-09T21:16:42.321426+0000","last_clean":"2026-03-09T21:16:42.321426+0000","last_became_active":"2026-03-09T21:16:10.210768+0000","last_became_peered":"2026-03-09T21:16:10.210768+0000","last_unstale":"2026-03-09T21:16:42.321426+0000","last_undegraded":"2026-03-09T21:16:42.321426+0000","last_fullsized":"2026-03-09T21:16:42.321426+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:14:45.154745+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213283+0000","last_change":"2026-03-09T21:16:14.279508+0000","last_active":"2026-03-09T21:16:42.213283+0000","last_peered":"2026-03-09T21:16:42.213283+0000","last_clean":"2026-03-09T21:16:42.213283+0000","last_became_active":"2026-03-09T21:16:14.279292+0000","last_became_peered":"2026-03-09T21:16:14.279292+0000","last_unstale":"2026-03-09T21:16:42.213283+0000","last_undegraded":"2026-03-09T21:16:42.213283+0000","last_fullsized":"2026-03-09T21:16:42.213283+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210
067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:56:03.989342+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325327+0000","last_change":"2026-03-09T21:16:16.275100+0000","last_active":"2026-03-09T21:16:42.325327+0000","last_peered":"2026-03-09T21:16:42.325327+0000","last_clean":"2026-03-09T21:16:42.325327+0000","last_became_active":"2026-03-09T21:16:16.274971+0000","last_became_peered":"2026-03-09T21:16:16.274971+0000","las
t_unstale":"2026-03-09T21:16:42.325327+0000","last_undegraded":"2026-03-09T21:16:42.325327+0000","last_fullsized":"2026-03-09T21:16:42.325327+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:49:30.051853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","ve
rsion":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321011+0000","last_change":"2026-03-09T21:16:12.233802+0000","last_active":"2026-03-09T21:16:42.321011+0000","last_peered":"2026-03-09T21:16:42.321011+0000","last_clean":"2026-03-09T21:16:42.321011+0000","last_became_active":"2026-03-09T21:16:12.233655+0000","last_became_peered":"2026-03-09T21:16:12.233655+0000","last_unstale":"2026-03-09T21:16:42.321011+0000","last_undegraded":"2026-03-09T21:16:42.321011+0000","last_fullsized":"2026-03-09T21:16:42.321011+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:32:54.151455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213900+0000","last_change":"2026-03-09T21:16:10.206014+0000","last_active":"2026-03-09T21:16:42.213900+0000","last_peered":"2026-03-09T21:16:42.213900+0000","last_clean":"2026-03-09T21:16:42.213900+0000","last_became_active":"2026-03-09T21:16:10.205791+0000","last_became_peered":"2026-03-09T21:16:10.205791+0000","last_unstale":"2026-03-09T21:16:42.213900+0000","last_undegraded":"2026-03-09T21:16:42.213900+0000","last_fullsized":"2026-03-09T21:16:42.213900+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:29:29.995942+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325202+0000","last_change":"2026-03-09T21:16:14.287745+0000","last_active":"2026-03-09T21:16:42.325202+0000","last_peered":"2026-03-09T21:16:42.325202+0000","last_clean":"2026-03-09T21:16:42.325202+0000","last_became_active":"2026-03-09T21:16:14.287561+0000","last_became_peered":"2026-03-09T21:16:14.287561+0000
","last_unstale":"2026-03-09T21:16:42.325202+0000","last_undegraded":"2026-03-09T21:16:42.325202+0000","last_fullsized":"2026-03-09T21:16:42.325202+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:08:43.548598+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701024+0000","last_change":"2026-03-09T21:16:17.156625+0000","last_active":"2026-03-09T21:16:41.701024+0000","last_peered":"2026-03-09T21:16:41.701024+0000","last_clean":"2026-03-09T21:16:41.701024+0000","last_became_active":"2026-03-09T21:16:17.156465+0000","last_became_peered":"2026-03-09T21:16:17.156465+0000","last_unstale":"2026-03-09T21:16:41.701024+0000","last_undegraded":"2026-03-09T21:16:41.701024+0000","last_fullsized":"2026-03-09T21:16:41.701024+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:37:43.243071+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"62'1","reported_seq":23,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324392+0000","last_change":"2026-03-09T21:16:16.264098+0000","last_active":"2026-03-09T21:16:42.324392+0000","last_peered":"2026-03-09T21:16:42.324392+0000","last_clean":"2026-03-09T21:16:42.324392+0000","last_became_active":"2026-03-09T21:16:16.263765+0000","last_became_peered":"2026-03-09T21:16:16.263765+0000","last_unstale":"2026-03-09T21:16:42.324392+0000","last_undegraded":"2026-03-09T21:16:42.324392+0000","last_fullsized":"2026-03-09T21:16:42.324392+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.23
4556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:30:24.005470+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699590+0000","last_change":"2026-03-09T21:16:12.241180+0000","last_active":"2026-03-09T21:16:41.699590+0000","last_peered":"2026-03-09T21:16:41.699590+0000","last_clean":"2026-03-09T21:16:41.699590+0000","last_became_active":"2026-03-09T21:16:12.241057+0000","last_became_peered":"2026-03-09T21:16:12.241057+0000"
,"last_unstale":"2026-03-09T21:16:41.699590+0000","last_undegraded":"2026-03-09T21:16:41.699590+0000","last_fullsized":"2026-03-09T21:16:41.699590+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:09:07.125323+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"2.18","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320378+0000","last_change":"2026-03-09T21:16:10.228232+0000","last_active":"2026-03-09T21:16:42.320378+0000","last_peered":"2026-03-09T21:16:42.320378+0000","last_clean":"2026-03-09T21:16:42.320378+0000","last_became_active":"2026-03-09T21:16:10.227447+0000","last_became_peered":"2026-03-09T21:16:10.227447+0000","last_unstale":"2026-03-09T21:16:42.320378+0000","last_undegraded":"2026-03-09T21:16:42.320378+0000","last_fullsized":"2026-03-09T21:16:42.320378+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:01:56.759266+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"63'11","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:20.293741+0000","last_change":"2026-03-09T21:16:14.274489+0000","last_active":"2026-03-09T21:17:20.293741+0000","last_peered":"2026-03-09T21:17:20.293741+0000","last_clean":"2026-03-09T21:17:20.293741+0000","last_became_active":"2026-03-09T21:16:14.273294+0000","last_became_peered":"2026-03-09T21:16:14.273294+0000","last_unstale":"2026-03-09T21:17:20.293741+0000","last_undegraded":"2026-03-09T21:17:20.293741+0000","last_fullsized":"2026-03-09T21:17:20.293741+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2
10067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:59:31.884899+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.705100+0000","last_change":"2026-03-09T21:16:16.272426+0000","last_active":"2026-03-09T21:16:41.705100+0000","last_peered":"2026-03-09T21:16:41.705100+0000","last_clean":"2026-03-09T21:16:41.705100+0000","last_became_active":"2026-03-09T21:16:16.272161+0000","last_became_peered":"2026-03-09T21:16:16.272161+0000",
"last_unstale":"2026-03-09T21:16:41.705100+0000","last_undegraded":"2026-03-09T21:16:41.705100+0000","last_fullsized":"2026-03-09T21:16:41.705100+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:35:40.987488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18"
,"version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707217+0000","last_change":"2026-03-09T21:16:12.242926+0000","last_active":"2026-03-09T21:16:41.707217+0000","last_peered":"2026-03-09T21:16:41.707217+0000","last_clean":"2026-03-09T21:16:41.707217+0000","last_became_active":"2026-03-09T21:16:12.241568+0000","last_became_peered":"2026-03-09T21:16:12.241568+0000","last_unstale":"2026-03-09T21:16:41.707217+0000","last_undegraded":"2026-03-09T21:16:41.707217+0000","last_fullsized":"2026-03-09T21:16:41.707217+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:43:13.793074+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707157+0000","last_change":"2026-03-09T21:16:10.209692+0000","last_active":"2026-03-09T21:16:41.707157+0000","last_peered":"2026-03-09T21:16:41.707157+0000","last_clean":"2026-03-09T21:16:41.707157+0000","last_became_active":"2026-03-09T21:16:10.209312+0000","last_became_peered":"2026-03-09T21:16:10.209312+0000","last_unstale":"2026-03-09T21:16:41.707157+0000","last_undegraded":"2026-03-09T21:16:41.707157+0000","last_fullsized":"2026-03-09T21:16:41.707157+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16
:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:17:44.302980+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226079+0000","last_change":"2026-03-09T21:16:14.277807+0000","last_active":"2026-03-09T21:16:42.226079+0000","last_peered":"2026-03-09T21:16:42.226079+0000","last_clean":"2026-03-09T21:16:42.226079+0000","last_became_active":"2026-03-09T21:16:14.277483+0000","last_became_peered":"2026-03-09T21:16:14.277483+00
00","last_unstale":"2026-03-09T21:16:42.226079+0000","last_undegraded":"2026-03-09T21:16:42.226079+0000","last_fullsized":"2026-03-09T21:16:42.226079+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:39:34.949383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6
.1e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698055+0000","last_change":"2026-03-09T21:16:17.156266+0000","last_active":"2026-03-09T21:16:41.698055+0000","last_peered":"2026-03-09T21:16:41.698055+0000","last_clean":"2026-03-09T21:16:41.698055+0000","last_became_active":"2026-03-09T21:16:17.156114+0000","last_became_peered":"2026-03-09T21:16:17.156114+0000","last_unstale":"2026-03-09T21:16:41.698055+0000","last_undegraded":"2026-03-09T21:16:41.698055+0000","last_fullsized":"2026-03-09T21:16:41.698055+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:53:58.467754+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213694+0000","last_change":"2026-03-09T21:16:10.225538+0000","last_active":"2026-03-09T21:16:42.213694+0000","last_peered":"2026-03-09T21:16:42.213694+0000","last_clean":"2026-03-09T21:16:42.213694+0000","last_became_active":"2026-03-09T21:16:10.224946+0000","last_became_peered":"2026-03-09T21:16:10.224946+0000","last_unstale":"2026-03-09T21:16:42.213694+0000","last_undegraded":"2026-03-09T21:16:42.213694+0000","last_fullsized":"2026-03-09T21:16:42.213694+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148
205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:57:58.282907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"62'5","reported_seq":39,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226536+0000","last_change":"2026-03-09T21:16:12.245556+0000","last_active":"2026-03-09T21:16:42.226536+0000","last_peered":"2026-03-09T21:16:42.226536+0000","last_clean":"2026-03-09T21:16:42.226536+0000","last_became_active":"2026-03-09T21:16:12.245445+0000","last_became_peered":"2026-03-09T21:16:12.245445+0000","la
st_unstale":"2026-03-09T21:16:42.226536+0000","last_undegraded":"2026-03-09T21:16:42.226536+0000","last_fullsized":"2026-03-09T21:16:42.226536+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:11:44.253286+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d"
,"version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.696495+0000","last_change":"2026-03-09T21:16:14.268395+0000","last_active":"2026-03-09T21:16:41.696495+0000","last_peered":"2026-03-09T21:16:41.696495+0000","last_clean":"2026-03-09T21:16:41.696495+0000","last_became_active":"2026-03-09T21:16:14.267468+0000","last_became_peered":"2026-03-09T21:16:14.267468+0000","last_unstale":"2026-03-09T21:16:41.696495+0000","last_undegraded":"2026-03-09T21:16:41.696495+0000","last_fullsized":"2026-03-09T21:16:41.696495+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:45:12.224442+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702122+0000","last_change":"2026-03-09T21:16:17.158977+0000","last_active":"2026-03-09T21:16:41.702122+0000","last_peered":"2026-03-09T21:16:41.702122+0000","last_clean":"2026-03-09T21:16:41.702122+0000","last_became_active":"2026-03-09T21:16:17.158586+0000","last_became_peered":"2026-03-09T21:16:17.158586+0000","last_unstale":"2026-03-09T21:16:41.702122+0000","last_undegraded":"2026-03-09T21:16:41.702122+0000","last_fullsized":"2026-03-09T21:16:41.702122+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234
556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:42:36.578316+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702748+0000","last_change":"2026-03-09T21:16:10.224856+0000","last_active":"2026-03-09T21:16:41.702748+0000","last_peered":"2026-03-09T21:16:41.702748+0000","last_clean":"2026-03-09T21:16:41.702748+0000","last_became_active":"2026-03-09T21:16:10.224702+0000","last_became_peered":"2026-03-09T21:16:10.224702+0000","la
st_unstale":"2026-03-09T21:16:41.702748+0000","last_undegraded":"2026-03-09T21:16:41.702748+0000","last_fullsized":"2026-03-09T21:16:41.702748+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:39:29.706215+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a
","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698499+0000","last_change":"2026-03-09T21:16:12.234692+0000","last_active":"2026-03-09T21:16:41.698499+0000","last_peered":"2026-03-09T21:16:41.698499+0000","last_clean":"2026-03-09T21:16:41.698499+0000","last_became_active":"2026-03-09T21:16:12.234554+0000","last_became_peered":"2026-03-09T21:16:12.234554+0000","last_unstale":"2026-03-09T21:16:41.698499+0000","last_undegraded":"2026-03-09T21:16:41.698499+0000","last_fullsized":"2026-03-09T21:16:41.698499+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:04:47.560911+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698514+0000","last_change":"2026-03-09T21:16:14.265930+0000","last_active":"2026-03-09T21:16:41.698514+0000","last_peered":"2026-03-09T21:16:41.698514+0000","last_clean":"2026-03-09T21:16:41.698514+0000","last_became_active":"2026-03-09T21:16:14.265782+0000","last_became_peered":"2026-03-09T21:16:14.265782+0000","last_unstale":"2026-03-09T21:16:41.698514+0000","last_undegraded":"2026-03-09T21:16:41.698514+0000","last_fullsized":"2026-03-09T21:16:41.698514+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:38:08.395343+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapse
ts":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub
_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":51,"seq":219043332117,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27960,"kb_used_data":1124,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939464,"statfs":{"total":21470642176,"available":21442011136,"internally_reserved":0,"allocated":1150976,"data_stored":716500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561053,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27924,"kb_used_data":1092,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939500,"statfs":{"total":21470642176,"available":21442048000,"internally_reserved":0,"allocated":1118208,"data_stored":714722,"data_compressed":0,"data_compressed_allocat
ed":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":36,"seq":154618822692,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27488,"kb_used_data":648,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939936,"statfs":{"total":21470642176,"available":21442494464,"internally_reserved":0,"allocated":663552,"data_stored":255300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":30,"seq":128849018924,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27516,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939908,"statfs":{"total":21470642176,"available":21442465792,"internally_reserved":0,"allocated":692224,"data_stored":255394,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182450,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27488,"kb_used_data":648,"kb_used_omap":
1,"kb_used_meta":26814,"kb_avail":20939936,"statfs":{"total":21470642176,"available":21442494464,"internally_reserved":0,"allocated":663552,"data_stored":256278,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411386,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":254942,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574912,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":667648,"data_stored":254780,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_laten
cy_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738439,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27944,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939480,"statfs":{"total":21470642176,"available":21442027520,"internally_reserved":0,"allocated":1134592,"data_stored":714397,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_sto
red":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0
,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0
,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserve
d":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"alloc
ated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T21:17:30.233 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph pg dump --format=json 2026-03-09T21:17:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:30 vm10 bash[23387]: cluster 2026-03-09T21:17:29.730678+0000 mgr.y (mgr.24416) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:30 vm10 bash[23387]: cluster 2026-03-09T21:17:29.730678+0000 mgr.y (mgr.24416) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:31.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:30 vm07 bash[20771]: cluster 2026-03-09T21:17:29.730678+0000 mgr.y (mgr.24416) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:31.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:30 vm07 bash[20771]: cluster 2026-03-09T21:17:29.730678+0000 mgr.y (mgr.24416) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:31.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:30 vm07 bash[28052]: cluster 2026-03-09T21:17:29.730678+0000 mgr.y (mgr.24416) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:31.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:30 vm07 bash[28052]: cluster 2026-03-09T21:17:29.730678+0000 mgr.y (mgr.24416) 63 : 
cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:31 vm10 bash[23387]: audit 2026-03-09T21:17:30.163315+0000 mgr.y (mgr.24416) 64 : audit [DBG] from='client.14625 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:31 vm10 bash[23387]: audit 2026-03-09T21:17:30.163315+0000 mgr.y (mgr.24416) 64 : audit [DBG] from='client.14625 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:32.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:31 vm07 bash[20771]: audit 2026-03-09T21:17:30.163315+0000 mgr.y (mgr.24416) 64 : audit [DBG] from='client.14625 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:32.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:31 vm07 bash[20771]: audit 2026-03-09T21:17:30.163315+0000 mgr.y (mgr.24416) 64 : audit [DBG] from='client.14625 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:32.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:31 vm07 bash[28052]: audit 2026-03-09T21:17:30.163315+0000 mgr.y (mgr.24416) 64 : audit [DBG] from='client.14625 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:32.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:31 vm07 bash[28052]: audit 2026-03-09T21:17:30.163315+0000 mgr.y (mgr.24416) 64 : audit [DBG] from='client.14625 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:32 vm10 
bash[23387]: cluster 2026-03-09T21:17:31.730981+0000 mgr.y (mgr.24416) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T21:17:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:32 vm10 bash[23387]: cluster 2026-03-09T21:17:31.730981+0000 mgr.y (mgr.24416) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T21:17:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:32 vm07 bash[20771]: cluster 2026-03-09T21:17:31.730981+0000 mgr.y (mgr.24416) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T21:17:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:32 vm07 bash[20771]: cluster 2026-03-09T21:17:31.730981+0000 mgr.y (mgr.24416) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T21:17:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:32 vm07 bash[28052]: cluster 2026-03-09T21:17:31.730981+0000 mgr.y (mgr.24416) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T21:17:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:32 vm07 bash[28052]: cluster 2026-03-09T21:17:31.730981+0000 mgr.y (mgr.24416) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T21:17:34.928 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:34 vm10 bash[23387]: cluster 2026-03-09T21:17:33.731627+0000 mgr.y (mgr.24416) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:34 vm10 bash[23387]: cluster 2026-03-09T21:17:33.731627+0000 mgr.y (mgr.24416) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:35.203 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:17:35.204 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-09T21:17:35.216 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:34 vm07 bash[20771]: cluster 2026-03-09T21:17:33.731627+0000 mgr.y (mgr.24416) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:35.216 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:34 vm07 bash[20771]: cluster 2026-03-09T21:17:33.731627+0000 mgr.y (mgr.24416) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:35.216 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:34 vm07 bash[28052]: cluster 2026-03-09T21:17:33.731627+0000 mgr.y (mgr.24416) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:35.216 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:34 vm07 bash[28052]: cluster 2026-03-09T21:17:33.731627+0000 mgr.y (mgr.24416) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:35.271 
INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":29,"stamp":"2026-03-09T21:17:33.731186+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":916,"num_read_kb":775,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221292,"kb_used_data":6588,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518100,"statfs":{"total":171765137408,"available":171538534400,"internally_reserved":0,"allocated":6746112,"data_stored":3422313,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12710,"internal_metadata":219663962},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1
},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002612"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707276+0000","last_change":"2026-03-09T21:16:17.158946+0000","last_active":"2026-03-09T21:16:41.707276+0000","last_peered":"2026-03-09T21:16:41.707276+0000","last_clean":"2026-03-09T21:16:41.707276+0000","last_became_active":"2026-03-09T21:16:17.158240+0000","last_became_peered":"2026-03-09T21:16:17.158240+0000","last_unstale":"2026-03-09T21:16:41.707276+0000","last_undegraded":"2026-03-09T21:16:41.707276+0000","last_fullsized":"2026-03-09T21:16:41.707276+0000","mapping_epoch":60,"log_start":"0'0","
ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:29:11.157141+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225923+0000","last_change":"2026-03-09T21:16:10.226478+0000","last_active":"20
26-03-09T21:16:42.225923+0000","last_peered":"2026-03-09T21:16:42.225923+0000","last_clean":"2026-03-09T21:16:42.225923+0000","last_became_active":"2026-03-09T21:16:10.226353+0000","last_became_peered":"2026-03-09T21:16:10.226353+0000","last_unstale":"2026-03-09T21:16:42.225923+0000","last_undegraded":"2026-03-09T21:16:42.225923+0000","last_fullsized":"2026-03-09T21:16:42.225923+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:42:13.558560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707322+0000","last_change":"2026-03-09T21:16:12.239920+0000","last_active":"2026-03-09T21:16:41.707322+0000","last_peered":"2026-03-09T21:16:41.707322+0000","last_clean":"2026-03-09T21:16:41.707322+0000","last_became_active":"2026-03-09T21:16:12.239835+0000","last_became_peered":"2026-03-09T21:16:12.239835+0000","last_unstale":"2026-03-09T21:16:41.707322+0000","last_undegraded":"2026-03-09T21:16:41.707322+0000","last_fullsized":"2026-03-09T21:16:41.707322+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:10:40.718574+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698096+0000","last_change":"2026-03-09T21:16:14.265529+0000","last_active":"2026-03-09T21:16:41.698096+0000","last_peered":"2026-03-09T21:16:41.698096+0000","last_clean":"2026-03-09T21:16:41.698096+0000","last_became_active":"2026-03-09T21:16:14.265424+0000","last_became_peered":"2026-03-09T21:16:14.265424+00
00","last_unstale":"2026-03-09T21:16:41.698096+0000","last_undegraded":"2026-03-09T21:16:41.698096+0000","last_fullsized":"2026-03-09T21:16:41.698096+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:34:14.049193+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.1e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702053+0000","last_change":"2026-03-09T21:16:10.209450+0000","last_active":"2026-03-09T21:16:41.702053+0000","last_peered":"2026-03-09T21:16:41.702053+0000","last_clean":"2026-03-09T21:16:41.702053+0000","last_became_active":"2026-03-09T21:16:10.209260+0000","last_became_peered":"2026-03-09T21:16:10.209260+0000","last_unstale":"2026-03-09T21:16:41.702053+0000","last_undegraded":"2026-03-09T21:16:41.702053+0000","last_fullsized":"2026-03-09T21:16:41.702053+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:51:55.928780+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225814+0000","last_change":"2026-03-09T21:16:12.245039+0000","last_active":"2026-03-09T21:16:42.225814+0000","last_peered":"2026-03-09T21:16:42.225814+0000","last_clean":"2026-03-09T21:16:42.225814+0000","last_became_active":"2026-03-09T21:16:12.244951+0000","last_became_peered":"2026-03-09T21:16:12.244951+0000","last_unstale":"2026-03-09T21:16:42.225814+0000","last_undegraded":"2026-03-09T21:16:42.225814+0000","last_fullsized":"2026-03-09T21:16:42.225814+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:04:53.656668+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.696585+0000","last_change":"2026-03-09T21:16:14.291218+0000","last_active":"2026-03-09T21:16:41.696585+0000","last_peered":"2026-03-09T21:16:41.696585+0000","last_clean":"2026-03-09T21:16:41.696585+0000","last_became_active":"2026-03-09T21:16:14.291087+0000","last_became_peered":"2026-03-09T21:16:14.291087+
0000","last_unstale":"2026-03-09T21:16:41.696585+0000","last_undegraded":"2026-03-09T21:16:41.696585+0000","last_fullsized":"2026-03-09T21:16:41.696585+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:59:08.923161+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"6.1a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697051+0000","last_change":"2026-03-09T21:16:16.272940+0000","last_active":"2026-03-09T21:16:41.697051+0000","last_peered":"2026-03-09T21:16:41.697051+0000","last_clean":"2026-03-09T21:16:41.697051+0000","last_became_active":"2026-03-09T21:16:16.272824+0000","last_became_peered":"2026-03-09T21:16:16.272824+0000","last_unstale":"2026-03-09T21:16:41.697051+0000","last_undegraded":"2026-03-09T21:16:41.697051+0000","last_fullsized":"2026-03-09T21:16:41.697051+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:42:46.466635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323215+0000","last_change":"2026-03-09T21:16:10.230625+0000","last_active":"2026-03-09T21:16:42.323215+0000","last_peered":"2026-03-09T21:16:42.323215+0000","last_clean":"2026-03-09T21:16:42.323215+0000","last_became_active":"2026-03-09T21:16:10.222805+0000","last_became_peered":"2026-03-09T21:16:10.222805+0000","last_unstale":"2026-03-09T21:16:42.323215+0000","last_undegraded":"2026-03-09T21:16:42.323215+0000","last_fullsized":"2026-03-09T21:16:42.323215+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148
205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:17:20.278109+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321899+0000","last_change":"2026-03-09T21:16:12.233737+0000","last_active":"2026-03-09T21:16:42.321899+0000","last_peered":"2026-03-09T21:16:42.321899+0000","last_clean":"2026-03-09T21:16:42.321899+0000","last_became_active":"2026-03-09T21:16:12.233527+0000","last_became_peered":"2026-03-09T21:16:12.233527+0000","l
ast_unstale":"2026-03-09T21:16:42.321899+0000","last_undegraded":"2026-03-09T21:16:42.321899+0000","last_fullsized":"2026-03-09T21:16:42.321899+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:26:30.048144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.1a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325623+0000","last_change":"2026-03-09T21:16:14.303232+0000","last_active":"2026-03-09T21:16:42.325623+0000","last_peered":"2026-03-09T21:16:42.325623+0000","last_clean":"2026-03-09T21:16:42.325623+0000","last_became_active":"2026-03-09T21:16:14.302938+0000","last_became_peered":"2026-03-09T21:16:14.302938+0000","last_unstale":"2026-03-09T21:16:42.325623+0000","last_undegraded":"2026-03-09T21:16:42.325623+0000","last_fullsized":"2026-03-09T21:16:42.325623+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:52:53.558981+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321836+0000","last_change":"2026-03-09T21:16:16.270387+0000","last_active":"2026-03-09T21:16:42.321836+0000","last_peered":"2026-03-09T21:16:42.321836+0000","last_clean":"2026-03-09T21:16:42.321836+0000","last_became_active":"2026-03-09T21:16:16.270201+0000","last_became_peered":"2026-03-09T21:16:16.270201+0000","last_unstale":"2026-03-09T21:16:42.321836+0000","last_undegraded":"2026-03-09T21:16:42.321836+0000","last_fullsized":"2026-03-09T21:16:42.321836+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234
556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:34:04.006503+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324930+0000","last_change":"2026-03-09T21:16:10.230498+0000","last_active":"2026-03-09T21:16:42.324930+0000","last_peered":"2026-03-09T21:16:42.324930+0000","last_clean":"2026-03-09T21:16:42.324930+0000","last_became_active":"2026-03-09T21:16:10.224476+0000","last_became_peered":"2026-03-09T21:16:10.224476+0000","las
t_unstale":"2026-03-09T21:16:42.324930+0000","last_undegraded":"2026-03-09T21:16:42.324930+0000","last_fullsized":"2026-03-09T21:16:42.324930+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:15:22.994320+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","ve
rsion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321020+0000","last_change":"2026-03-09T21:16:12.229777+0000","last_active":"2026-03-09T21:16:42.321020+0000","last_peered":"2026-03-09T21:16:42.321020+0000","last_clean":"2026-03-09T21:16:42.321020+0000","last_became_active":"2026-03-09T21:16:12.229677+0000","last_became_peered":"2026-03-09T21:16:12.229677+0000","last_unstale":"2026-03-09T21:16:42.321020+0000","last_undegraded":"2026-03-09T21:16:42.321020+0000","last_fullsized":"2026-03-09T21:16:42.321020+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:29:57.542544+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320756+0000","last_change":"2026-03-09T21:16:14.282348+0000","last_active":"2026-03-09T21:16:42.320756+0000","last_peered":"2026-03-09T21:16:42.320756+0000","last_clean":"2026-03-09T21:16:42.320756+0000","last_became_active":"2026-03-09T21:16:14.281898+0000","last_became_peered":"2026-03-09T21:16:14.281898+0000","last_unstale":"2026-03-09T21:16:42.320756+0000","last_undegraded":"2026-03-09T21:16:42.320756+0000","last_fullsized":"2026-03-09T21:16:42.320756+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:28:23.086143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225429+0000","last_change":"2026-03-09T21:16:16.277041+0000","last_active":"2026-03-09T21:16:42.225429+0000","last_peered":"2026-03-09T21:16:42.225429+0000","last_clean":"2026-03-09T21:16:42.225429+0000","last_became_active":"2026-03-09T21:16:16.275293+0000","last_became_peered":"2026-03-09T21:16:16.275293+0000
","last_unstale":"2026-03-09T21:16:42.225429+0000","last_undegraded":"2026-03-09T21:16:42.225429+0000","last_fullsized":"2026-03-09T21:16:42.225429+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:19:43.875198+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a
","version":"62'19","reported_seq":60,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213814+0000","last_change":"2026-03-09T21:16:12.254801+0000","last_active":"2026-03-09T21:16:42.213814+0000","last_peered":"2026-03-09T21:16:42.213814+0000","last_clean":"2026-03-09T21:16:42.213814+0000","last_became_active":"2026-03-09T21:16:12.248116+0000","last_became_peered":"2026-03-09T21:16:12.248116+0000","last_unstale":"2026-03-09T21:16:42.213814+0000","last_undegraded":"2026-03-09T21:16:42.213814+0000","last_fullsized":"2026-03-09T21:16:42.213814+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:44:20.112832+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323327+0000","last_change":"2026-03-09T21:16:10.230055+0000","last_active":"2026-03-09T21:16:42.323327+0000","last_peered":"2026-03-09T21:16:42.323327+0000","last_clean":"2026-03-09T21:16:42.323327+0000","last_became_active":"2026-03-09T21:16:10.225603+0000","last_became_peered":"2026-03-09T21:16:10.225603+0000","last_unstale":"2026-03-09T21:16:42.323327+0000","last_undegraded":"2026-03-09T21:16:42.323327+0000","last_fullsized":"2026-03-09T21:16:42.323327+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:45:18.737318+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699826+0000","last_change":"2026-03-09T21:16:14.265321+0000","last_active":"2026-03-09T21:16:41.699826+0000","last_peered":"2026-03-09T21:16:41.699826+0000","last_clean":"2026-03-09T21:16:41.699826+0000","last_became_active":"2026-03-09T21:16:14.264421+0000","last_became_peered":"2026-03-09T21:16:14.264421+0000",
"last_unstale":"2026-03-09T21:16:41.699826+0000","last_undegraded":"2026-03-09T21:16:41.699826+0000","last_fullsized":"2026-03-09T21:16:41.699826+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:27:09.495584+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320587+0000","last_change":"2026-03-09T21:16:16.274797+0000","last_active":"2026-03-09T21:16:42.320587+0000","last_peered":"2026-03-09T21:16:42.320587+0000","last_clean":"2026-03-09T21:16:42.320587+0000","last_became_active":"2026-03-09T21:16:16.274606+0000","last_became_peered":"2026-03-09T21:16:16.274606+0000","last_unstale":"2026-03-09T21:16:42.320587+0000","last_undegraded":"2026-03-09T21:16:42.320587+0000","last_fullsized":"2026-03-09T21:16:42.320587+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:36:26.687182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701705+0000","last_change":"2026-03-09T21:16:12.245416+0000","last_active":"2026-03-09T21:16:41.701705+0000","last_peered":"2026-03-09T21:16:41.701705+0000","last_clean":"2026-03-09T21:16:41.701705+0000","last_became_active":"2026-03-09T21:16:12.245076+0000","last_became_peered":"2026-03-09T21:16:12.245076+0000","last_unstale":"2026-03-09T21:16:41.701705+0000","last_undegraded":"2026-03-09T21:16:41.701705+0000","last_fullsized":"2026-03-09T21:16:41.701705+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186
072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:07:39.100162+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698894+0000","last_change":"2026-03-09T21:16:10.230680+0000","last_active":"2026-03-09T21:16:41.698894+0000","last_peered":"2026-03-09T21:16:41.698894+0000","last_clean":"2026-03-09T21:16:41.698894+0000","last_became_active":"2026-03-09T21:16:10.230530+0000","last_became_peered":"2026-03-09T21:16:10.230530+0000"
,"last_unstale":"2026-03-09T21:16:41.698894+0000","last_undegraded":"2026-03-09T21:16:41.698894+0000","last_fullsized":"2026-03-09T21:16:41.698894+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:23:50.501252+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d"
,"version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294453+0000","last_change":"2026-03-09T21:16:14.274506+0000","last_active":"2026-03-09T21:17:25.294453+0000","last_peered":"2026-03-09T21:17:25.294453+0000","last_clean":"2026-03-09T21:17:25.294453+0000","last_became_active":"2026-03-09T21:16:14.273735+0000","last_became_peered":"2026-03-09T21:16:14.273735+0000","last_unstale":"2026-03-09T21:17:25.294453+0000","last_undegraded":"2026-03-09T21:17:25.294453+0000","last_fullsized":"2026-03-09T21:17:25.294453+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:24:23.354182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698785+0000","last_change":"2026-03-09T21:16:16.272358+0000","last_active":"2026-03-09T21:16:41.698785+0000","last_peered":"2026-03-09T21:16:41.698785+0000","last_clean":"2026-03-09T21:16:41.698785+0000","last_became_active":"2026-03-09T21:16:16.272216+0000","last_became_peered":"2026-03-09T21:16:16.272216+0000","last_unstale":"2026-03-09T21:16:41.698785+0000","last_undegraded":"2026-03-09T21:16:41.698785+0000","last_fullsized":"2026-03-09T21:16:41.698785+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:59:11.843479+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701788+0000","last_change":"2026-03-09T21:16:12.237751+0000","last_active":"2026-03-09T21:16:41.701788+0000","last_peered":"2026-03-09T21:16:41.701788+0000","last_clean":"2026-03-09T21:16:41.701788+0000","last_became_active":"2026-03-09T21:16:12.237626+0000","last_became_peered":"2026-03-09T21:16:12.237626+0000","las
t_unstale":"2026-03-09T21:16:41.701788+0000","last_undegraded":"2026-03-09T21:16:41.701788+0000","last_fullsized":"2026-03-09T21:16:41.701788+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:32:16.399457+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2
.9","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698967+0000","last_change":"2026-03-09T21:16:10.230301+0000","last_active":"2026-03-09T21:16:41.698967+0000","last_peered":"2026-03-09T21:16:41.698967+0000","last_clean":"2026-03-09T21:16:41.698967+0000","last_became_active":"2026-03-09T21:16:10.230135+0000","last_became_peered":"2026-03-09T21:16:10.230135+0000","last_unstale":"2026-03-09T21:16:41.698967+0000","last_undegraded":"2026-03-09T21:16:41.698967+0000","last_fullsized":"2026-03-09T21:16:41.698967+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:45:31.433139+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294236+0000","last_change":"2026-03-09T21:16:14.285289+0000","last_active":"2026-03-09T21:17:25.294236+0000","last_peered":"2026-03-09T21:17:25.294236+0000","last_clean":"2026-03-09T21:17:25.294236+0000","last_became_active":"2026-03-09T21:16:14.285156+0000","last_became_peered":"2026-03-09T21:16:14.285156+0000","last_unstale":"2026-03-09T21:17:25.294236+0000","last_undegraded":"2026-03-09T21:17:25.294236+0000","last_fullsized":"2026-03-09T21:17:25.294236+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.21
0067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:49:46.586928+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.322154+0000","last_change":"2026-03-09T21:16:16.280255+0000","last_active":"2026-03-09T21:16:42.322154+0000","last_peered":"2026-03-09T21:16:42.322154+0000","last_clean":"2026-03-09T21:16:42.322154+0000","last_became_active":"2026-03-09T21:16:16.280138+0000","last_became_peered":"2026-03-09T21:16:16.280138+0000","l
ast_unstale":"2026-03-09T21:16:42.322154+0000","last_undegraded":"2026-03-09T21:16:42.322154+0000","last_fullsized":"2026-03-09T21:16:42.322154+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:35:49.324454+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","v
ersion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697438+0000","last_change":"2026-03-09T21:16:12.224976+0000","last_active":"2026-03-09T21:16:41.697438+0000","last_peered":"2026-03-09T21:16:41.697438+0000","last_clean":"2026-03-09T21:16:41.697438+0000","last_became_active":"2026-03-09T21:16:12.224799+0000","last_became_peered":"2026-03-09T21:16:12.224799+0000","last_unstale":"2026-03-09T21:16:41.697438+0000","last_undegraded":"2026-03-09T21:16:41.697438+0000","last_fullsized":"2026-03-09T21:16:41.697438+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:05:31.055356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323354+0000","last_change":"2026-03-09T21:16:10.226545+0000","last_active":"2026-03-09T21:16:42.323354+0000","last_peered":"2026-03-09T21:16:42.323354+0000","last_clean":"2026-03-09T21:16:42.323354+0000","last_became_active":"2026-03-09T21:16:10.222457+0000","last_became_peered":"2026-03-09T21:16:10.222457+0000","last_unstale":"2026-03-09T21:16:42.323354+0000","last_undegraded":"2026-03-09T21:16:42.323354+0000","last_fullsized":"2026-03-09T21:16:42.323354+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:51:28.666609+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.322285+0000","last_change":"2026-03-09T21:16:14.266661+0000","last_active":"2026-03-09T21:16:42.322285+0000","last_peered":"2026-03-09T21:16:42.322285+0000","last_clean":"2026-03-09T21:16:42.322285+0000","last_became_active":"2026-03-09T21:16:14.266567+0000","last_became_peered":"2026-03-09T21:16:14.266567+0000",
"last_unstale":"2026-03-09T21:16:42.322285+0000","last_undegraded":"2026-03-09T21:16:42.322285+0000","last_fullsized":"2026-03-09T21:16:42.322285+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:17:48.388987+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701075+0000","last_change":"2026-03-09T21:16:17.159081+0000","last_active":"2026-03-09T21:16:41.701075+0000","last_peered":"2026-03-09T21:16:41.701075+0000","last_clean":"2026-03-09T21:16:41.701075+0000","last_became_active":"2026-03-09T21:16:17.158937+0000","last_became_peered":"2026-03-09T21:16:17.158937+0000","last_unstale":"2026-03-09T21:16:41.701075+0000","last_undegraded":"2026-03-09T21:16:41.701075+0000","last_fullsized":"2026-03-09T21:16:41.701075+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:51:30.900661+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"62'12","reported_seq":47,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226414+0000","last_change":"2026-03-09T21:16:12.244509+0000","last_active":"2026-03-09T21:16:42.226414+0000","last_peered":"2026-03-09T21:16:42.226414+0000","last_clean":"2026-03-09T21:16:42.226414+0000","last_became_active":"2026-03-09T21:16:12.244220+0000","last_became_peered":"2026-03-09T21:16:12.244220+0000","last_unstale":"2026-03-09T21:16:42.226414+0000","last_undegraded":"2026-03-09T21:16:42.226414+0000","last_fullsized":"2026-03-09T21:16:42.226414+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:39:26.095125+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.214082+0000","last_change":"2026-03-09T21:16:10.226264+0000","last_active":"2026-03-09T21:16:42.214082+0000","last_peered":"2026-03-09T21:16:42.214082+0000","last_clean":"2026-03-09T21:16:42.214082+0000","last_became_active":"2026-03-09T21:16:10.226138+0000","last_became_peered":"2026-03-09T21:16:10.226138+0000
","last_unstale":"2026-03-09T21:16:42.214082+0000","last_undegraded":"2026-03-09T21:16:42.214082+0000","last_fullsized":"2026-03-09T21:16:42.214082+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:27:03.455796+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1
","version":"62'1","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698691+0000","last_change":"2026-03-09T21:16:19.400305+0000","last_active":"2026-03-09T21:16:41.698691+0000","last_peered":"2026-03-09T21:16:41.698691+0000","last_clean":"2026-03-09T21:16:41.698691+0000","last_became_active":"2026-03-09T21:16:13.245150+0000","last_became_peered":"2026-03-09T21:16:13.245150+0000","last_unstale":"2026-03-09T21:16:41.698691+0000","last_undegraded":"2026-03-09T21:16:41.698691+0000","last_fullsized":"2026-03-09T21:16:41.698691+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_clean_scrub_stamp":"2026-03-09T21:16:12.191428+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:16:56.257340+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000435506,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294431+0000","last_change":"2026-03-09T21:16:14.266070+0000","last_active":"2026-03-09T21:17:25.294431+0000","last_peered":"2026-03-09T21:17:25.294431+0000","last_clean":"2026-03-09T21:17:25.294431+0000","last_became_active":"2026-03-09T21:16:14.265871+0000","last_became_peered":"2026-03-09T21:16:14.265871+0000","last_unstale":"2026-03-09T21:17:25.294431+0000","last_undegraded":"2026-03-09T21:17:25.294431+0000","last_fullsized":"2026-03-09T21:17:25.294431+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T2
1:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:29:35.855617+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324114+0000","last_change":"2026-03-09T21:16:17.152221+0000","last_active":"2026-03-09T21:16:42.324114+0000","last_peered":"2026-03-09T21:16:42.324114+0000","last_clean":"2026-03-09T21:16:42.324114+0000","last_became_active":"2026-03-09T21:16:17.151708+0000","last_became_peered":"2026-03-09T21:16:17.15170
8+0000","last_unstale":"2026-03-09T21:16:42.324114+0000","last_undegraded":"2026-03-09T21:16:42.324114+0000","last_fullsized":"2026-03-09T21:16:42.324114+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:39:06.428621+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid
":"3.7","version":"62'13","reported_seq":56,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701368+0000","last_change":"2026-03-09T21:16:12.243013+0000","last_active":"2026-03-09T21:16:41.701368+0000","last_peered":"2026-03-09T21:16:41.701368+0000","last_clean":"2026-03-09T21:16:41.701368+0000","last_became_active":"2026-03-09T21:16:12.242734+0000","last_became_peered":"2026-03-09T21:16:12.242734+0000","last_unstale":"2026-03-09T21:16:41.701368+0000","last_undegraded":"2026-03-09T21:16:41.701368+0000","last_fullsized":"2026-03-09T21:16:41.701368+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:31:27.056422+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699042+0000","last_change":"2026-03-09T21:16:10.219855+0000","last_active":"2026-03-09T21:16:41.699042+0000","last_peered":"2026-03-09T21:16:41.699042+0000","last_clean":"2026-03-09T21:16:41.699042+0000","last_became_active":"2026-03-09T21:16:10.219721+0000","last_became_peered":"2026-03-09T21:16:10.219721+0000","last_unstale":"2026-03-09T21:16:41.699042+0000","last_undegraded":"2026-03-09T21:16:41.699042+0000","last_fullsized":"2026-03-09T21:16:41.699042+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:50:12.205240+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"64'5","reported_seq":109,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:31.262055+0000","last_change":"2026-03-09T21:16:19.401140+0000","last_active":"2026-03-09T21:17:31.262055+0000","last_peered":"2026-03-09T21:17:31.262055+0000","last_clean":"2026-03-09T21:17:31.262055+0000","last_became_active":"2026-03-09T21:16:13.262951+0000","last_became_peered":"2026-03-09T21:16:13.262951+00
00","last_unstale":"2026-03-09T21:17:31.262055+0000","last_undegraded":"2026-03-09T21:17:31.262055+0000","last_fullsized":"2026-03-09T21:17:31.262055+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_clean_scrub_stamp":"2026-03-09T21:16:12.191428+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:20:38.620893+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00093184599999999995,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":72,"num_read_kb":67,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"pur
ged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697860+0000","last_change":"2026-03-09T21:16:14.282500+0000","last_active":"2026-03-09T21:16:41.697860+0000","last_peered":"2026-03-09T21:16:41.697860+0000","last_clean":"2026-03-09T21:16:41.697860+0000","last_became_active":"2026-03-09T21:16:14.282399+0000","last_became_peered":"2026-03-09T21:16:14.282399+0000","last_unstale":"2026-03-09T21:16:41.697860+0000","last_undegraded":"2026-03-09T21:16:41.697860+0000","last_fullsized":"2026-03-09T21:16:41.697860+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:01.824890+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697785+0000","last_change":"2026-03-09T21:16:16.276282+0000","last_active":"2026-03-09T21:16:41.697785+0000","last_peered":"2026-03-09T21:16:41.697785+0000","last_clean":"2026-03-09T21:16:41.697785+0000","last_became_active":"2026-03-09T21:16:16.273725+0000","last_became_peered":"2026-03-09T21:16:16.273725+0000","last_unstale":"2026-03-09T21:16:41.697785+0000","last_undegraded":"2026-03-09T21:16:41.697785+0000","last_fullsized":"2026-03-09T21:16:41.697785+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:24:00.894516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"63'30","reported_seq":96,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294998+0000","last_change":"2026-03-09T21:16:12.237652+0000","last_active":"2026-03-09T21:17:25.294998+0000","last_peered":"2026-03-09T21:17:25.294998+0000","last_clean":"2026-03-09T21:17:25.294998+0000","last_became_active":"2026-03-09T21:16:12.237544+0000","last_became_peered":"2026-03-09T21:16:12.237544+0000","las
t_unstale":"2026-03-09T21:17:25.294998+0000","last_undegraded":"2026-03-09T21:17:25.294998+0000","last_fullsized":"2026-03-09T21:17:25.294998+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:13:30.396330+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"2.5","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323420+0000","last_change":"2026-03-09T21:16:10.232495+0000","last_active":"2026-03-09T21:16:42.323420+0000","last_peered":"2026-03-09T21:16:42.323420+0000","last_clean":"2026-03-09T21:16:42.323420+0000","last_became_active":"2026-03-09T21:16:10.222432+0000","last_became_peered":"2026-03-09T21:16:10.222432+0000","last_unstale":"2026-03-09T21:16:42.323420+0000","last_undegraded":"2026-03-09T21:16:42.323420+0000","last_fullsized":"2026-03-09T21:16:42.323420+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:30:50.317955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.214588+0000","last_change":"2026-03-09T21:16:14.259163+0000","last_active":"2026-03-09T21:16:42.214588+0000","last_peered":"2026-03-09T21:16:42.214588+0000","last_clean":"2026-03-09T21:16:42.214588+0000","last_became_active":"2026-03-09T21:16:14.259025+0000","last_became_peered":"2026-03-09T21:16:14.259025+0000","last_unstale":"2026-03-09T21:16:42.214588+0000","last_undegraded":"2026-03-09T21:16:42.214588+0000","last_fullsized":"2026-03-09T21:16:42.214588+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2100
67+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:52:51.030315+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698058+0000","last_change":"2026-03-09T21:16:17.152272+0000","last_active":"2026-03-09T21:16:41.698058+0000","last_peered":"2026-03-09T21:16:41.698058+0000","last_clean":"2026-03-09T21:16:41.698058+0000","last_became_active":"2026-03-09T21:16:17.152023+0000","last_became_peered":"2026-03-09T21:16:17.152023+0000","last_
unstale":"2026-03-09T21:16:41.698058+0000","last_undegraded":"2026-03-09T21:16:41.698058+0000","last_fullsized":"2026-03-09T21:16:41.698058+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:57:09.225043+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","versi
on":"62'16","reported_seq":68,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294783+0000","last_change":"2026-03-09T21:16:12.222701+0000","last_active":"2026-03-09T21:17:25.294783+0000","last_peered":"2026-03-09T21:17:25.294783+0000","last_clean":"2026-03-09T21:17:25.294783+0000","last_became_active":"2026-03-09T21:16:12.222402+0000","last_became_peered":"2026-03-09T21:16:12.222402+0000","last_unstale":"2026-03-09T21:17:25.294783+0000","last_undegraded":"2026-03-09T21:17:25.294783+0000","last_fullsized":"2026-03-09T21:17:25.294783+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:07:26.664155+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699094+0000","last_change":"2026-03-09T21:16:10.230597+0000","last_active":"2026-03-09T21:16:41.699094+0000","last_peered":"2026-03-09T21:16:41.699094+0000","last_clean":"2026-03-09T21:16:41.699094+0000","last_became_active":"2026-03-09T21:16:10.230105+0000","last_became_peered":"2026-03-09T21:16:10.230105+0000","last_unstale":"2026-03-09T21:16:41.699094+0000","last_undegraded":"2026-03-09T21:16:41.699094+0000","last_fullsized":"2026-03-09T21:16:41.699094+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:54:43.876223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"64'2","reported_seq":36,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699145+0000","last_change":"2026-03-09T21:16:19.410663+0000","last_active":"2026-03-09T21:16:41.699145+0000","last_peered":"2026-03-09T21:16:41.699145+0000","last_clean":"2026-03-09T21:16:41.699145+0000","last_became_active":"2026-03-09T21:16:13.247614+0000","last_became_peered":"2026-03-09T21:16:13.247614+0000"
,"last_unstale":"2026-03-09T21:16:41.699145+0000","last_undegraded":"2026-03-09T21:16:41.699145+0000","last_fullsized":"2026-03-09T21:16:41.699145+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:12.191428+0000","last_clean_scrub_stamp":"2026-03-09T21:16:12.191428+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:32:53.049064+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.0010782089999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_sna
ps":[]},{"pgid":"5.3","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.295293+0000","last_change":"2026-03-09T21:16:14.260466+0000","last_active":"2026-03-09T21:17:25.295293+0000","last_peered":"2026-03-09T21:17:25.295293+0000","last_clean":"2026-03-09T21:17:25.295293+0000","last_became_active":"2026-03-09T21:16:14.260359+0000","last_became_peered":"2026-03-09T21:16:14.260359+0000","last_unstale":"2026-03-09T21:17:25.295293+0000","last_undegraded":"2026-03-09T21:17:25.295293+0000","last_fullsized":"2026-03-09T21:17:25.295293+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:02:23.995526+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225628+0000","last_change":"2026-03-09T21:16:16.275442+0000","last_active":"2026-03-09T21:16:42.225628+0000","last_peered":"2026-03-09T21:16:42.225628+0000","last_clean":"2026-03-09T21:16:42.225628+0000","last_became_active":"2026-03-09T21:16:16.270305+0000","last_became_peered":"2026-03-09T21:16:16.270305+0000","last_unstale":"2026-03-09T21:16:42.225628+0000","last_undegraded":"2026-03-09T21:16:42.225628+0000","last_fullsized":"2026-03-09T21:16:42.225628+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:27:03.391196+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"62'19","reported_seq":65,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697593+0000","last_change":"2026-03-09T21:16:12.232513+0000","last_active":"2026-03-09T21:16:41.697593+0000","last_peered":"2026-03-09T21:16:41.697593+0000","last_clean":"2026-03-09T21:16:41.697593+0000","last_became_active":"2026-03-09T21:16:12.232399+0000","last_became_peered":"2026-03-09T21:16:12.232399+0000","las
t_unstale":"2026-03-09T21:16:41.697593+0000","last_undegraded":"2026-03-09T21:16:41.697593+0000","last_fullsized":"2026-03-09T21:16:41.697593+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:08:04.524835+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.2","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320292+0000","last_change":"2026-03-09T21:16:10.217238+0000","last_active":"2026-03-09T21:16:42.320292+0000","last_peered":"2026-03-09T21:16:42.320292+0000","last_clean":"2026-03-09T21:16:42.320292+0000","last_became_active":"2026-03-09T21:16:10.216498+0000","last_became_peered":"2026-03-09T21:16:10.216498+0000","last_unstale":"2026-03-09T21:16:42.320292+0000","last_undegraded":"2026-03-09T21:16:42.320292+0000","last_fullsized":"2026-03-09T21:16:42.320292+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:49:56.184937+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225966+0000","last_change":"2026-03-09T21:16:14.267067+0000","last_active":"2026-03-09T21:16:42.225966+0000","last_peered":"2026-03-09T21:16:42.225966+0000","last_clean":"2026-03-09T21:16:42.225966+0000","last_became_active":"2026-03-09T21:16:14.266634+0000","last_became_peered":"2026-03-09T21:16:14.266634+0000","last_unstale":"2026-03-09T21:16:42.225966+0000","last_undegraded":"2026-03-09T21:16:42.225966+0000","last_fullsized":"2026-03-09T21:16:42.225966+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2100
67+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:21:46.006225+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701160+0000","last_change":"2026-03-09T21:16:16.273616+0000","last_active":"2026-03-09T21:16:41.701160+0000","last_peered":"2026-03-09T21:16:41.701160+0000","last_clean":"2026-03-09T21:16:41.701160+0000","last_became_active":"2026-03-09T21:16:16.273281+0000","last_became_peered":"2026-03-09T21:16:16.273281+0000","last_
unstale":"2026-03-09T21:16:41.701160+0000","last_undegraded":"2026-03-09T21:16:41.701160+0000","last_fullsized":"2026-03-09T21:16:41.701160+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:10:08.533145+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","versi
on":"62'18","reported_seq":61,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698691+0000","last_change":"2026-03-09T21:16:12.248035+0000","last_active":"2026-03-09T21:16:41.698691+0000","last_peered":"2026-03-09T21:16:41.698691+0000","last_clean":"2026-03-09T21:16:41.698691+0000","last_became_active":"2026-03-09T21:16:12.247910+0000","last_became_peered":"2026-03-09T21:16:12.247910+0000","last_unstale":"2026-03-09T21:16:41.698691+0000","last_undegraded":"2026-03-09T21:16:41.698691+0000","last_fullsized":"2026-03-09T21:16:41.698691+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:36:25.689805+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320448+0000","last_change":"2026-03-09T21:16:10.207522+0000","last_active":"2026-03-09T21:16:42.320448+0000","last_peered":"2026-03-09T21:16:42.320448+0000","last_clean":"2026-03-09T21:16:42.320448+0000","last_became_active":"2026-03-09T21:16:10.207168+0000","last_became_peered":"2026-03-09T21:16:10.207168+0000","last_unstale":"2026-03-09T21:16:42.320448+0000","last_undegraded":"2026-03-09T21:16:42.320448+0000","last_fullsized":"2026-03-09T21:16:42.320448+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:08:27.281888+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320833+0000","last_change":"2026-03-09T21:16:14.274889+0000","last_active":"2026-03-09T21:16:42.320833+0000","last_peered":"2026-03-09T21:16:42.320833+0000","last_clean":"2026-03-09T21:16:42.320833+0000","last_became_active":"2026-03-09T21:16:14.274365+0000","last_became_peered":"2026-03-09T21:16:14.274365+0000",
"last_unstale":"2026-03-09T21:16:42.320833+0000","last_undegraded":"2026-03-09T21:16:42.320833+0000","last_fullsized":"2026-03-09T21:16:42.320833+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:22:05.044533+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325440+0000","last_change":"2026-03-09T21:16:17.152082+0000","last_active":"2026-03-09T21:16:42.325440+0000","last_peered":"2026-03-09T21:16:42.325440+0000","last_clean":"2026-03-09T21:16:42.325440+0000","last_became_active":"2026-03-09T21:16:17.151462+0000","last_became_peered":"2026-03-09T21:16:17.151462+0000","last_unstale":"2026-03-09T21:16:42.325440+0000","last_undegraded":"2026-03-09T21:16:42.325440+0000","last_fullsized":"2026-03-09T21:16:42.325440+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:06:05.628662+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":50,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226364+0000","last_change":"2026-03-09T21:16:12.254210+0000","last_active":"2026-03-09T21:16:42.226364+0000","last_peered":"2026-03-09T21:16:42.226364+0000","last_clean":"2026-03-09T21:16:42.226364+0000","last_became_active":"2026-03-09T21:16:12.254069+0000","last_became_peered":"2026-03-09T21:16:12.254069+0000","last_unstale":"2026-03-09T21:16:42.226364+0000","last_undegraded":"2026-03-09T21:16:42.226364+0000","last_fullsized":"2026-03-09T21:16:42.226364+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:11:27.225400+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323299+0000","last_change":"2026-03-09T21:16:10.226636+0000","last_active":"2026-03-09T21:16:42.323299+0000","last_peered":"2026-03-09T21:16:42.323299+0000","last_clean":"2026-03-09T21:16:42.323299+0000","last_became_active":"2026-03-09T21:16:10.222626+0000","last_became_peered":"2026-03-09T21:16:10.222626+0000
","last_unstale":"2026-03-09T21:16:42.323299+0000","last_undegraded":"2026-03-09T21:16:42.323299+0000","last_fullsized":"2026-03-09T21:16:42.323299+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:24:56.416965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7
","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321982+0000","last_change":"2026-03-09T21:16:14.278782+0000","last_active":"2026-03-09T21:16:42.321982+0000","last_peered":"2026-03-09T21:16:42.321982+0000","last_clean":"2026-03-09T21:16:42.321982+0000","last_became_active":"2026-03-09T21:16:14.278576+0000","last_became_peered":"2026-03-09T21:16:14.278576+0000","last_unstale":"2026-03-09T21:16:42.321982+0000","last_undegraded":"2026-03-09T21:16:42.321982+0000","last_fullsized":"2026-03-09T21:16:42.321982+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:13:29.774246+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699453+0000","last_change":"2026-03-09T21:16:16.267326+0000","last_active":"2026-03-09T21:16:41.699453+0000","last_peered":"2026-03-09T21:16:41.699453+0000","last_clean":"2026-03-09T21:16:41.699453+0000","last_became_active":"2026-03-09T21:16:16.267207+0000","last_became_peered":"2026-03-09T21:16:16.267207+0000","last_unstale":"2026-03-09T21:16:41.699453+0000","last_undegraded":"2026-03-09T21:16:41.699453+0000","last_fullsized":"2026-03-09T21:16:41.699453+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:25:14.739165+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702839+0000","last_change":"2026-03-09T21:16:12.240616+0000","last_active":"2026-03-09T21:16:41.702839+0000","last_peered":"2026-03-09T21:16:41.702839+0000","last_clean":"2026-03-09T21:16:41.702839+0000","last_became_active":"2026-03-09T21:16:12.240395+0000","last_became_peered":"2026-03-09T21:16:12.240395+0000","las
t_unstale":"2026-03-09T21:16:41.702839+0000","last_undegraded":"2026-03-09T21:16:41.702839+0000","last_fullsized":"2026-03-09T21:16:41.702839+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:54:16.271756+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320912+0000","last_change":"2026-03-09T21:16:10.227723+0000","last_active":"2026-03-09T21:16:42.320912+0000","last_peered":"2026-03-09T21:16:42.320912+0000","last_clean":"2026-03-09T21:16:42.320912+0000","last_became_active":"2026-03-09T21:16:10.227546+0000","last_became_peered":"2026-03-09T21:16:10.227546+0000","last_unstale":"2026-03-09T21:16:42.320912+0000","last_undegraded":"2026-03-09T21:16:42.320912+0000","last_fullsized":"2026-03-09T21:16:42.320912+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:55:31.731415+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"66'39","reported_seq":68,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:43.784267+0000","last_change":"2026-03-09T21:15:49.169775+0000","last_active":"2026-03-09T21:16:43.784267+0000","last_peered":"2026-03-09T21:16:43.784267+0000","last_clean":"2026-03-09T21:16:43.784267+0000","last_became_active":"2026-03-09T21:15:49.162649+0000","last_became_peered":"2026-03-09T21:15:49.162649+0000","last_unstale":"2026-03-09T21:16:43.784267+0000","last_undegraded":"2026-03-09T21:16:43.784267+0000","last_fullsized":"2026-03-09T21:16:43.784267+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:12:51.208803+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:12:51.20
8803+0000","last_clean_scrub_stamp":"2026-03-09T21:12:51.208803+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:57:59.142300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324850+0000","last_change":"2026-03-09T21:16:14.287882+0000","last_active":"2026-03-09T21:16:42.324850+0000","last_peered":"2026-03-09T21:16:42.324850+0000","last_clean":"2026-03-09T21:16:42.324850+0000","last_became_active":"2026-03-09T21:16:14.287549+0000","last_became_peered":"2026-03-09T21:16:1
4.287549+0000","last_unstale":"2026-03-09T21:16:42.324850+0000","last_undegraded":"2026-03-09T21:16:42.324850+0000","last_fullsized":"2026-03-09T21:16:42.324850+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:15:17.664068+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}
,{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320841+0000","last_change":"2026-03-09T21:16:16.273012+0000","last_active":"2026-03-09T21:16:42.320841+0000","last_peered":"2026-03-09T21:16:42.320841+0000","last_clean":"2026-03-09T21:16:42.320841+0000","last_became_active":"2026-03-09T21:16:16.272927+0000","last_became_peered":"2026-03-09T21:16:16.272927+0000","last_unstale":"2026-03-09T21:16:42.320841+0000","last_undegraded":"2026-03-09T21:16:42.320841+0000","last_fullsized":"2026-03-09T21:16:42.320841+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:32:44.469965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"62'17","reported_seq":57,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323882+0000","last_change":"2026-03-09T21:16:12.232337+0000","last_active":"2026-03-09T21:16:42.323882+0000","last_peered":"2026-03-09T21:16:42.323882+0000","last_clean":"2026-03-09T21:16:42.323882+0000","last_became_active":"2026-03-09T21:16:12.232091+0000","last_became_peered":"2026-03-09T21:16:12.232091+0000","last_unstale":"2026-03-09T21:16:42.323882+0000","last_undegraded":"2026-03-09T21:16:42.323882+0000","last_fullsized":"2026-03-09T21:16:42.323882+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:29:03.818724+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320946+0000","last_change":"2026-03-09T21:16:10.207431+0000","last_active":"2026-03-09T21:16:42.320946+0000","last_peered":"2026-03-09T21:16:42.320946+0000","last_clean":"2026-03-09T21:16:42.320946+0000","last_became_active":"2026-03-09T21:16:10.207019+0000","last_became_peered":"2026-03-09T21:16:10.207019+00
00","last_unstale":"2026-03-09T21:16:42.320946+0000","last_undegraded":"2026-03-09T21:16:42.320946+0000","last_fullsized":"2026-03-09T21:16:42.320946+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:03:30.578746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5
.b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320975+0000","last_change":"2026-03-09T21:16:14.261145+0000","last_active":"2026-03-09T21:16:42.320975+0000","last_peered":"2026-03-09T21:16:42.320975+0000","last_clean":"2026-03-09T21:16:42.320975+0000","last_became_active":"2026-03-09T21:16:14.261015+0000","last_became_peered":"2026-03-09T21:16:14.261015+0000","last_unstale":"2026-03-09T21:16:42.320975+0000","last_undegraded":"2026-03-09T21:16:42.320975+0000","last_fullsized":"2026-03-09T21:16:42.320975+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:08:04.549744+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323920+0000","last_change":"2026-03-09T21:16:16.267032+0000","last_active":"2026-03-09T21:16:42.323920+0000","last_peered":"2026-03-09T21:16:42.323920+0000","last_clean":"2026-03-09T21:16:42.323920+0000","last_became_active":"2026-03-09T21:16:16.266912+0000","last_became_peered":"2026-03-09T21:16:16.266912+0000","last_unstale":"2026-03-09T21:16:42.323920+0000","last_undegraded":"2026-03-09T21:16:42.323920+0000","last_fullsized":"2026-03-09T21:16:42.323920+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:48:18.921950+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321738+0000","last_change":"2026-03-09T21:16:12.238570+0000","last_active":"2026-03-09T21:16:42.321738+0000","last_peered":"2026-03-09T21:16:42.321738+0000","last_clean":"2026-03-09T21:16:42.321738+0000","last_became_active":"2026-03-09T21:16:12.238332+0000","last_became_peered":"2026-03-09T21:16:12.238332+0000","las
t_unstale":"2026-03-09T21:16:42.321738+0000","last_undegraded":"2026-03-09T21:16:42.321738+0000","last_fullsized":"2026-03-09T21:16:42.321738+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:01:27.938454+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699548+0000","last_change":"2026-03-09T21:16:10.216661+0000","last_active":"2026-03-09T21:16:41.699548+0000","last_peered":"2026-03-09T21:16:41.699548+0000","last_clean":"2026-03-09T21:16:41.699548+0000","last_became_active":"2026-03-09T21:16:10.216548+0000","last_became_peered":"2026-03-09T21:16:10.216548+0000","last_unstale":"2026-03-09T21:16:41.699548+0000","last_undegraded":"2026-03-09T21:16:41.699548+0000","last_fullsized":"2026-03-09T21:16:41.699548+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:32:06.723542+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321230+0000","last_change":"2026-03-09T21:16:14.264873+0000","last_active":"2026-03-09T21:16:42.321230+0000","last_peered":"2026-03-09T21:16:42.321230+0000","last_clean":"2026-03-09T21:16:42.321230+0000","last_became_active":"2026-03-09T21:16:14.264645+0000","last_became_peered":"2026-03-09T21:16:14.264645+0000","last_unstale":"2026-03-09T21:16:42.321230+0000","last_undegraded":"2026-03-09T21:16:42.321230+0000","last_fullsized":"2026-03-09T21:16:42.321230+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2100
67+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:26:37.126300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225333+0000","last_change":"2026-03-09T21:16:16.275373+0000","last_active":"2026-03-09T21:16:42.225333+0000","last_peered":"2026-03-09T21:16:42.225333+0000","last_clean":"2026-03-09T21:16:42.225333+0000","last_became_active":"2026-03-09T21:16:16.270182+0000","last_became_peered":"2026-03-09T21:16:16.270182+0000","last_
unstale":"2026-03-09T21:16:42.225333+0000","last_undegraded":"2026-03-09T21:16:42.225333+0000","last_fullsized":"2026-03-09T21:16:42.225333+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:12:21.691688+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","versi
on":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324449+0000","last_change":"2026-03-09T21:16:12.245451+0000","last_active":"2026-03-09T21:16:42.324449+0000","last_peered":"2026-03-09T21:16:42.324449+0000","last_clean":"2026-03-09T21:16:42.324449+0000","last_became_active":"2026-03-09T21:16:12.245342+0000","last_became_peered":"2026-03-09T21:16:12.245342+0000","last_unstale":"2026-03-09T21:16:42.324449+0000","last_undegraded":"2026-03-09T21:16:42.324449+0000","last_fullsized":"2026-03-09T21:16:42.324449+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:50:26.299246+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320494+0000","last_change":"2026-03-09T21:16:10.223307+0000","last_active":"2026-03-09T21:16:42.320494+0000","last_peered":"2026-03-09T21:16:42.320494+0000","last_clean":"2026-03-09T21:16:42.320494+0000","last_became_active":"2026-03-09T21:16:10.223100+0000","last_became_peered":"2026-03-09T21:16:10.223100+0000","last_unstale":"2026-03-09T21:16:42.320494+0000","last_undegraded":"2026-03-09T21:16:42.320494+0000","last_fullsized":"2026-03-09T21:16:42.320494+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:0
9.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:31:09.474052+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294636+0000","last_change":"2026-03-09T21:16:14.287852+0000","last_active":"2026-03-09T21:17:25.294636+0000","last_peered":"2026-03-09T21:17:25.294636+0000","last_clean":"2026-03-09T21:17:25.294636+0000","last_became_active":"2026-03-09T21:16:14.287754+0000","last_became_peered":"2026-03-09T21:16:14.287754+0000
","last_unstale":"2026-03-09T21:17:25.294636+0000","last_undegraded":"2026-03-09T21:17:25.294636+0000","last_fullsized":"2026-03-09T21:17:25.294636+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:23:06.291274+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6
.a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320647+0000","last_change":"2026-03-09T21:16:17.158049+0000","last_active":"2026-03-09T21:16:42.320647+0000","last_peered":"2026-03-09T21:16:42.320647+0000","last_clean":"2026-03-09T21:16:42.320647+0000","last_became_active":"2026-03-09T21:16:17.157829+0000","last_became_peered":"2026-03-09T21:16:17.157829+0000","last_unstale":"2026-03-09T21:16:42.320647+0000","last_undegraded":"2026-03-09T21:16:42.320647+0000","last_fullsized":"2026-03-09T21:16:42.320647+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:48:32.647715+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325023+0000","last_change":"2026-03-09T21:16:12.235713+0000","last_active":"2026-03-09T21:16:42.325023+0000","last_peered":"2026-03-09T21:16:42.325023+0000","last_clean":"2026-03-09T21:16:42.325023+0000","last_became_active":"2026-03-09T21:16:12.235577+0000","last_became_peered":"2026-03-09T21:16:12.235577+0000","last_unstale":"2026-03-09T21:16:42.325023+0000","last_undegraded":"2026-03-09T21:16:42.325023+0000","last_fullsized":"2026-03-09T21:16:42.325023+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.18
6072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T06:43:55.885246+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"55'2","reported_seq":49,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698367+0000","last_change":"2026-03-09T21:16:10.223447+0000","last_active":"2026-03-09T21:16:41.698367+0000","last_peered":"2026-03-09T21:16:41.698367+0000","last_clean":"2026-03-09T21:16:41.698367+0000","last_became_active":"2026-03-09T21:16:10.223290+0000","last_became_peered":"2026-03-09T21:16:10.223290+0
000","last_unstale":"2026-03-09T21:16:41.698367+0000","last_undegraded":"2026-03-09T21:16:41.698367+0000","last_fullsized":"2026-03-09T21:16:41.698367+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:13:40.060786+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid
":"5.8","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321153+0000","last_change":"2026-03-09T21:16:14.262654+0000","last_active":"2026-03-09T21:16:42.321153+0000","last_peered":"2026-03-09T21:16:42.321153+0000","last_clean":"2026-03-09T21:16:42.321153+0000","last_became_active":"2026-03-09T21:16:14.262264+0000","last_became_peered":"2026-03-09T21:16:14.262264+0000","last_unstale":"2026-03-09T21:16:42.321153+0000","last_undegraded":"2026-03-09T21:16:42.321153+0000","last_fullsized":"2026-03-09T21:16:42.321153+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:05:49.554558+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701119+0000","last_change":"2026-03-09T21:16:16.266710+0000","last_active":"2026-03-09T21:16:41.701119+0000","last_peered":"2026-03-09T21:16:41.701119+0000","last_clean":"2026-03-09T21:16:41.701119+0000","last_became_active":"2026-03-09T21:16:16.265277+0000","last_became_peered":"2026-03-09T21:16:16.265277+0000","last_unstale":"2026-03-09T21:16:41.701119+0000","last_undegraded":"2026-03-09T21:16:41.701119+0000","last_fullsized":"2026-03-09T21:16:41.701119+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.2345
56+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:49:53.118319+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323799+0000","last_change":"2026-03-09T21:16:12.232405+0000","last_active":"2026-03-09T21:16:42.323799+0000","last_peered":"2026-03-09T21:16:42.323799+0000","last_clean":"2026-03-09T21:16:42.323799+0000","last_became_active":"2026-03-09T21:16:12.232253+0000","last_became_peered":"2026-03-09T21:16:12.232253+0000","la
st_unstale":"2026-03-09T21:16:42.323799+0000","last_undegraded":"2026-03-09T21:16:42.323799+0000","last_fullsized":"2026-03-09T21:16:42.323799+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:07:26.360266+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
2.10","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320343+0000","last_change":"2026-03-09T21:16:10.207596+0000","last_active":"2026-03-09T21:16:42.320343+0000","last_peered":"2026-03-09T21:16:42.320343+0000","last_clean":"2026-03-09T21:16:42.320343+0000","last_became_active":"2026-03-09T21:16:10.207326+0000","last_became_peered":"2026-03-09T21:16:10.207326+0000","last_unstale":"2026-03-09T21:16:42.320343+0000","last_undegraded":"2026-03-09T21:16:42.320343+0000","last_fullsized":"2026-03-09T21:16:42.320343+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:32:58.852705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707640+0000","last_change":"2026-03-09T21:16:14.275288+0000","last_active":"2026-03-09T21:16:41.707640+0000","last_peered":"2026-03-09T21:16:41.707640+0000","last_clean":"2026-03-09T21:16:41.707640+0000","last_became_active":"2026-03-09T21:16:14.272967+0000","last_became_peered":"2026-03-09T21:16:14.272967+0000","last_unstale":"2026-03-09T21:16:41.707640+0000","last_undegraded":"2026-03-09T21:16:41.707640+0000","last_fullsized":"2026-03-09T21:16:41.707640+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210
067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:25:29.357834+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320890+0000","last_change":"2026-03-09T21:16:16.274693+0000","last_active":"2026-03-09T21:16:42.320890+0000","last_peered":"2026-03-09T21:16:42.320890+0000","last_clean":"2026-03-09T21:16:42.320890+0000","last_became_active":"2026-03-09T21:16:16.274491+0000","last_became_peered":"2026-03-09T21:16:16.274491+0000","las
t_unstale":"2026-03-09T21:16:42.320890+0000","last_undegraded":"2026-03-09T21:16:42.320890+0000","last_fullsized":"2026-03-09T21:16:42.320890+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:20:32.759919+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","ve
rsion":"62'4","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213977+0000","last_change":"2026-03-09T21:16:12.248075+0000","last_active":"2026-03-09T21:16:42.213977+0000","last_peered":"2026-03-09T21:16:42.213977+0000","last_clean":"2026-03-09T21:16:42.213977+0000","last_became_active":"2026-03-09T21:16:12.241397+0000","last_became_peered":"2026-03-09T21:16:12.241397+0000","last_unstale":"2026-03-09T21:16:42.213977+0000","last_undegraded":"2026-03-09T21:16:42.213977+0000","last_fullsized":"2026-03-09T21:16:42.213977+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:35:28.009360+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213966+0000","last_change":"2026-03-09T21:16:10.218156+0000","last_active":"2026-03-09T21:16:42.213966+0000","last_peered":"2026-03-09T21:16:42.213966+0000","last_clean":"2026-03-09T21:16:42.213966+0000","last_became_active":"2026-03-09T21:16:10.217929+0000","last_became_peered":"2026-03-09T21:16:10.217929+0000","last_unstale":"2026-03-09T21:16:42.213966+0000","last_undegraded":"2026-03-09T21:16:42.213966+0000","last_fullsized":"2026-03-09T21:16:42.213966+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148
205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:30:32.611575+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.322042+0000","last_change":"2026-03-09T21:16:14.283836+0000","last_active":"2026-03-09T21:16:42.322042+0000","last_peered":"2026-03-09T21:16:42.322042+0000","last_clean":"2026-03-09T21:16:42.322042+0000","last_became_active":"2026-03-09T21:16:14.268301+0000","last_became_peered":"2026-03-09T21:16:14.268301+0000","las
t_unstale":"2026-03-09T21:16:42.322042+0000","last_undegraded":"2026-03-09T21:16:42.322042+0000","last_fullsized":"2026-03-09T21:16:42.322042+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:05:15.370994+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","ve
rsion":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325512+0000","last_change":"2026-03-09T21:16:17.154283+0000","last_active":"2026-03-09T21:16:42.325512+0000","last_peered":"2026-03-09T21:16:42.325512+0000","last_clean":"2026-03-09T21:16:42.325512+0000","last_became_active":"2026-03-09T21:16:17.154172+0000","last_became_peered":"2026-03-09T21:16:17.154172+0000","last_unstale":"2026-03-09T21:16:42.325512+0000","last_undegraded":"2026-03-09T21:16:42.325512+0000","last_fullsized":"2026-03-09T21:16:42.325512+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:28:07.199857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324230+0000","last_change":"2026-03-09T21:16:12.224697+0000","last_active":"2026-03-09T21:16:42.324230+0000","last_peered":"2026-03-09T21:16:42.324230+0000","last_clean":"2026-03-09T21:16:42.324230+0000","last_became_active":"2026-03-09T21:16:12.224433+0000","last_became_peered":"2026-03-09T21:16:12.224433+0000","last_unstale":"2026-03-09T21:16:42.324230+0000","last_undegraded":"2026-03-09T21:16:42.324230+0000","last_fullsized":"2026-03-09T21:16:42.324230+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:06:15.953602+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320427+0000","last_change":"2026-03-09T21:16:10.227999+0000","last_active":"2026-03-09T21:16:42.320427+0000","last_peered":"2026-03-09T21:16:42.320427+0000","last_clean":"2026-03-09T21:16:42.320427+0000","last_became_active":"2026-03-09T21:16:10.227846+0000","last_became_peered":"2026-03-09T21:16:10.227846
+0000","last_unstale":"2026-03-09T21:16:42.320427+0000","last_undegraded":"2026-03-09T21:16:42.320427+0000","last_fullsized":"2026-03-09T21:16:42.320427+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:03:25.824965+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"p
gid":"5.15","version":"63'11","reported_seq":55,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294649+0000","last_change":"2026-03-09T21:16:14.278691+0000","last_active":"2026-03-09T21:17:25.294649+0000","last_peered":"2026-03-09T21:17:25.294649+0000","last_clean":"2026-03-09T21:17:25.294649+0000","last_became_active":"2026-03-09T21:16:14.278362+0000","last_became_peered":"2026-03-09T21:16:14.278362+0000","last_unstale":"2026-03-09T21:17:25.294649+0000","last_undegraded":"2026-03-09T21:17:25.294649+0000","last_fullsized":"2026-03-09T21:17:25.294649+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:12:14.733604+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225497+0000","last_change":"2026-03-09T21:16:16.276544+0000","last_active":"2026-03-09T21:16:42.225497+0000","last_peered":"2026-03-09T21:16:42.225497+0000","last_clean":"2026-03-09T21:16:42.225497+0000","last_became_active":"2026-03-09T21:16:16.276443+0000","last_became_peered":"2026-03-09T21:16:16.276443+0000","last_unstale":"2026-03-09T21:16:42.225497+0000","last_undegraded":"2026-03-09T21:16:42.225497+0000","last_fullsized":"2026-03-09T21:16:42.225497+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234
556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:21:56.957892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226036+0000","last_change":"2026-03-09T21:16:12.253451+0000","last_active":"2026-03-09T21:16:42.226036+0000","last_peered":"2026-03-09T21:16:42.226036+0000","last_clean":"2026-03-09T21:16:42.226036+0000","last_became_active":"2026-03-09T21:16:12.252676+0000","last_became_peered":"2026-03-09T21:16:12.252676+0000","la
st_unstale":"2026-03-09T21:16:42.226036+0000","last_undegraded":"2026-03-09T21:16:42.226036+0000","last_fullsized":"2026-03-09T21:16:42.226036+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:10:28.599615+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.
13","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226787+0000","last_change":"2026-03-09T21:16:10.216983+0000","last_active":"2026-03-09T21:16:42.226787+0000","last_peered":"2026-03-09T21:16:42.226787+0000","last_clean":"2026-03-09T21:16:42.226787+0000","last_became_active":"2026-03-09T21:16:10.216656+0000","last_became_peered":"2026-03-09T21:16:10.216656+0000","last_unstale":"2026-03-09T21:16:42.226787+0000","last_undegraded":"2026-03-09T21:16:42.226787+0000","last_fullsized":"2026-03-09T21:16:42.226787+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:34:53.806330+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294518+0000","last_change":"2026-03-09T21:16:14.277022+0000","last_active":"2026-03-09T21:17:25.294518+0000","last_peered":"2026-03-09T21:17:25.294518+0000","last_clean":"2026-03-09T21:17:25.294518+0000","last_became_active":"2026-03-09T21:16:14.273186+0000","last_became_peered":"2026-03-09T21:16:14.273186+0000","last_unstale":"2026-03-09T21:17:25.294518+0000","last_undegraded":"2026-03-09T21:17:25.294518+0000","last_fullsized":"2026-03-09T21:17:25.294518+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2
10067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:52:00.481587+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697711+0000","last_change":"2026-03-09T21:16:16.276694+0000","last_active":"2026-03-09T21:16:41.697711+0000","last_peered":"2026-03-09T21:16:41.697711+0000","last_clean":"2026-03-09T21:16:41.697711+0000","last_became_active":"2026-03-09T21:16:16.275869+0000","last_became_peered":"2026-03-09T21:16:16.275869+0000",
"last_unstale":"2026-03-09T21:16:41.697711+0000","last_undegraded":"2026-03-09T21:16:41.697711+0000","last_fullsized":"2026-03-09T21:16:41.697711+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:23:27.691544+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15"
,"version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.323787+0000","last_change":"2026-03-09T21:16:12.225161+0000","last_active":"2026-03-09T21:16:42.323787+0000","last_peered":"2026-03-09T21:16:42.323787+0000","last_clean":"2026-03-09T21:16:42.323787+0000","last_became_active":"2026-03-09T21:16:12.224534+0000","last_became_peered":"2026-03-09T21:16:12.224534+0000","last_unstale":"2026-03-09T21:16:42.323787+0000","last_undegraded":"2026-03-09T21:16:42.323787+0000","last_fullsized":"2026-03-09T21:16:42.323787+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:04:38.541569+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213565+0000","last_change":"2026-03-09T21:16:10.212267+0000","last_active":"2026-03-09T21:16:42.213565+0000","last_peered":"2026-03-09T21:16:42.213565+0000","last_clean":"2026-03-09T21:16:42.213565+0000","last_became_active":"2026-03-09T21:16:10.211916+0000","last_became_peered":"2026-03-09T21:16:10.211916+0000","last_unstale":"2026-03-09T21:16:42.213565+0000","last_undegraded":"2026-03-09T21:16:42.213565+0000","last_fullsized":"2026-03-09T21:16:42.213565+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:05:54.459284+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701642+0000","last_change":"2026-03-09T21:16:14.263603+0000","last_active":"2026-03-09T21:16:41.701642+0000","last_peered":"2026-03-09T21:16:41.701642+0000","last_clean":"2026-03-09T21:16:41.701642+0000","last_became_active":"2026-03-09T21:16:14.263413+0000","last_became_peered":"2026-03-09T21:16:14.263413+0000
","last_unstale":"2026-03-09T21:16:41.701642+0000","last_undegraded":"2026-03-09T21:16:41.701642+0000","last_fullsized":"2026-03-09T21:16:41.701642+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:31:16.356656+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1
0","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.225571+0000","last_change":"2026-03-09T21:16:16.279437+0000","last_active":"2026-03-09T21:16:42.225571+0000","last_peered":"2026-03-09T21:16:42.225571+0000","last_clean":"2026-03-09T21:16:42.225571+0000","last_became_active":"2026-03-09T21:16:16.279348+0000","last_became_peered":"2026-03-09T21:16:16.279348+0000","last_unstale":"2026-03-09T21:16:42.225571+0000","last_undegraded":"2026-03-09T21:16:42.225571+0000","last_fullsized":"2026-03-09T21:16:42.225571+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:34:06.442063+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.697249+0000","last_change":"2026-03-09T21:16:12.239617+0000","last_active":"2026-03-09T21:16:41.697249+0000","last_peered":"2026-03-09T21:16:41.697249+0000","last_clean":"2026-03-09T21:16:41.697249+0000","last_became_active":"2026-03-09T21:16:12.239416+0000","last_became_peered":"2026-03-09T21:16:12.239416+0000","last_unstale":"2026-03-09T21:16:41.697249+0000","last_undegraded":"2026-03-09T21:16:41.697249+0000","last_fullsized":"2026-03-09T21:16:41.697249+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.1
86072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:56:20.827013+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699313+0000","last_change":"2026-03-09T21:16:10.210770+0000","last_active":"2026-03-09T21:16:41.699313+0000","last_peered":"2026-03-09T21:16:41.699313+0000","last_clean":"2026-03-09T21:16:41.699313+0000","last_became_active":"2026-03-09T21:16:10.210687+0000","last_became_peered":"2026-03-09T21:16:10.210687+00
00","last_unstale":"2026-03-09T21:16:41.699313+0000","last_undegraded":"2026-03-09T21:16:41.699313+0000","last_fullsized":"2026-03-09T21:16:41.699313+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:36:49.951697+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5
.12","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699270+0000","last_change":"2026-03-09T21:16:14.269683+0000","last_active":"2026-03-09T21:16:41.699270+0000","last_peered":"2026-03-09T21:16:41.699270+0000","last_clean":"2026-03-09T21:16:41.699270+0000","last_became_active":"2026-03-09T21:16:14.269254+0000","last_became_peered":"2026-03-09T21:16:14.269254+0000","last_unstale":"2026-03-09T21:16:41.699270+0000","last_undegraded":"2026-03-09T21:16:41.699270+0000","last_fullsized":"2026-03-09T21:16:41.699270+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:39:25.534484+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"62'1","reported_seq":22,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701205+0000","last_change":"2026-03-09T21:16:16.278021+0000","last_active":"2026-03-09T21:16:41.701205+0000","last_peered":"2026-03-09T21:16:41.701205+0000","last_clean":"2026-03-09T21:16:41.701205+0000","last_became_active":"2026-03-09T21:16:16.277930+0000","last_became_peered":"2026-03-09T21:16:16.277930+0000","last_unstale":"2026-03-09T21:16:41.701205+0000","last_undegraded":"2026-03-09T21:16:41.701205+0000","last_fullsized":"2026-03-09T21:16:41.701205+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.23
4556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:26:15.622259+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"62'6","reported_seq":38,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226466+0000","last_change":"2026-03-09T21:16:12.253857+0000","last_active":"2026-03-09T21:16:42.226466+0000","last_peered":"2026-03-09T21:16:42.226466+0000","last_clean":"2026-03-09T21:16:42.226466+0000","last_became_active":"2026-03-09T21:16:12.253608+0000","last_became_peered":"2026-03-09T21:16:12.253608+0000","
last_unstale":"2026-03-09T21:16:42.226466+0000","last_undegraded":"2026-03-09T21:16:42.226466+0000","last_fullsized":"2026-03-09T21:16:42.226466+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:55:50.922798+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16",
"version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321426+0000","last_change":"2026-03-09T21:16:10.211010+0000","last_active":"2026-03-09T21:16:42.321426+0000","last_peered":"2026-03-09T21:16:42.321426+0000","last_clean":"2026-03-09T21:16:42.321426+0000","last_became_active":"2026-03-09T21:16:10.210768+0000","last_became_peered":"2026-03-09T21:16:10.210768+0000","last_unstale":"2026-03-09T21:16:42.321426+0000","last_undegraded":"2026-03-09T21:16:42.321426+0000","last_fullsized":"2026-03-09T21:16:42.321426+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:14:45.154745+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213283+0000","last_change":"2026-03-09T21:16:14.279508+0000","last_active":"2026-03-09T21:16:42.213283+0000","last_peered":"2026-03-09T21:16:42.213283+0000","last_clean":"2026-03-09T21:16:42.213283+0000","last_became_active":"2026-03-09T21:16:14.279292+0000","last_became_peered":"2026-03-09T21:16:14.279292+0000","last_unstale":"2026-03-09T21:16:42.213283+0000","last_undegraded":"2026-03-09T21:16:42.213283+0000","last_fullsized":"2026-03-09T21:16:42.213283+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210
067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:56:03.989342+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325327+0000","last_change":"2026-03-09T21:16:16.275100+0000","last_active":"2026-03-09T21:16:42.325327+0000","last_peered":"2026-03-09T21:16:42.325327+0000","last_clean":"2026-03-09T21:16:42.325327+0000","last_became_active":"2026-03-09T21:16:16.274971+0000","last_became_peered":"2026-03-09T21:16:16.274971+0000","las
t_unstale":"2026-03-09T21:16:42.325327+0000","last_undegraded":"2026-03-09T21:16:42.325327+0000","last_fullsized":"2026-03-09T21:16:42.325327+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:49:30.051853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","ve
rsion":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.321011+0000","last_change":"2026-03-09T21:16:12.233802+0000","last_active":"2026-03-09T21:16:42.321011+0000","last_peered":"2026-03-09T21:16:42.321011+0000","last_clean":"2026-03-09T21:16:42.321011+0000","last_became_active":"2026-03-09T21:16:12.233655+0000","last_became_peered":"2026-03-09T21:16:12.233655+0000","last_unstale":"2026-03-09T21:16:42.321011+0000","last_undegraded":"2026-03-09T21:16:42.321011+0000","last_fullsized":"2026-03-09T21:16:42.321011+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:32:54.151455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213900+0000","last_change":"2026-03-09T21:16:10.206014+0000","last_active":"2026-03-09T21:16:42.213900+0000","last_peered":"2026-03-09T21:16:42.213900+0000","last_clean":"2026-03-09T21:16:42.213900+0000","last_became_active":"2026-03-09T21:16:10.205791+0000","last_became_peered":"2026-03-09T21:16:10.205791+0000","last_unstale":"2026-03-09T21:16:42.213900+0000","last_undegraded":"2026-03-09T21:16:42.213900+0000","last_fullsized":"2026-03-09T21:16:42.213900+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T04:29:29.995942+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.325202+0000","last_change":"2026-03-09T21:16:14.287745+0000","last_active":"2026-03-09T21:16:42.325202+0000","last_peered":"2026-03-09T21:16:42.325202+0000","last_clean":"2026-03-09T21:16:42.325202+0000","last_became_active":"2026-03-09T21:16:14.287561+0000","last_became_peered":"2026-03-09T21:16:14.287561+0000
","last_unstale":"2026-03-09T21:16:42.325202+0000","last_undegraded":"2026-03-09T21:16:42.325202+0000","last_fullsized":"2026-03-09T21:16:42.325202+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:08:43.548598+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.701024+0000","last_change":"2026-03-09T21:16:17.156625+0000","last_active":"2026-03-09T21:16:41.701024+0000","last_peered":"2026-03-09T21:16:41.701024+0000","last_clean":"2026-03-09T21:16:41.701024+0000","last_became_active":"2026-03-09T21:16:17.156465+0000","last_became_peered":"2026-03-09T21:16:17.156465+0000","last_unstale":"2026-03-09T21:16:41.701024+0000","last_undegraded":"2026-03-09T21:16:41.701024+0000","last_fullsized":"2026-03-09T21:16:41.701024+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:37:43.243071+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"62'1","reported_seq":23,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.324392+0000","last_change":"2026-03-09T21:16:16.264098+0000","last_active":"2026-03-09T21:16:42.324392+0000","last_peered":"2026-03-09T21:16:42.324392+0000","last_clean":"2026-03-09T21:16:42.324392+0000","last_became_active":"2026-03-09T21:16:16.263765+0000","last_became_peered":"2026-03-09T21:16:16.263765+0000","last_unstale":"2026-03-09T21:16:42.324392+0000","last_undegraded":"2026-03-09T21:16:42.324392+0000","last_fullsized":"2026-03-09T21:16:42.324392+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.23
4556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:30:24.005470+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.699590+0000","last_change":"2026-03-09T21:16:12.241180+0000","last_active":"2026-03-09T21:16:41.699590+0000","last_peered":"2026-03-09T21:16:41.699590+0000","last_clean":"2026-03-09T21:16:41.699590+0000","last_became_active":"2026-03-09T21:16:12.241057+0000","last_became_peered":"2026-03-09T21:16:12.241057+0000"
,"last_unstale":"2026-03-09T21:16:41.699590+0000","last_undegraded":"2026-03-09T21:16:41.699590+0000","last_fullsized":"2026-03-09T21:16:41.699590+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:09:07.125323+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"2.18","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.320378+0000","last_change":"2026-03-09T21:16:10.228232+0000","last_active":"2026-03-09T21:16:42.320378+0000","last_peered":"2026-03-09T21:16:42.320378+0000","last_clean":"2026-03-09T21:16:42.320378+0000","last_became_active":"2026-03-09T21:16:10.227447+0000","last_became_peered":"2026-03-09T21:16:10.227447+0000","last_unstale":"2026-03-09T21:16:42.320378+0000","last_undegraded":"2026-03-09T21:16:42.320378+0000","last_fullsized":"2026-03-09T21:16:42.320378+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:01:56.759266+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"63'11","reported_seq":55,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:17:25.294240+0000","last_change":"2026-03-09T21:16:14.274489+0000","last_active":"2026-03-09T21:17:25.294240+0000","last_peered":"2026-03-09T21:17:25.294240+0000","last_clean":"2026-03-09T21:17:25.294240+0000","last_became_active":"2026-03-09T21:16:14.273294+0000","last_became_peered":"2026-03-09T21:16:14.273294+0000","last_unstale":"2026-03-09T21:17:25.294240+0000","last_undegraded":"2026-03-09T21:17:25.294240+0000","last_fullsized":"2026-03-09T21:17:25.294240+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.2
10067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:59:31.884899+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.705100+0000","last_change":"2026-03-09T21:16:16.272426+0000","last_active":"2026-03-09T21:16:41.705100+0000","last_peered":"2026-03-09T21:16:41.705100+0000","last_clean":"2026-03-09T21:16:41.705100+0000","last_became_active":"2026-03-09T21:16:16.272161+0000","last_became_peered":"2026-03-09T21:16:16.272161+0000",
"last_unstale":"2026-03-09T21:16:41.705100+0000","last_undegraded":"2026-03-09T21:16:41.705100+0000","last_fullsized":"2026-03-09T21:16:41.705100+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:35:40.987488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18"
,"version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707217+0000","last_change":"2026-03-09T21:16:12.242926+0000","last_active":"2026-03-09T21:16:41.707217+0000","last_peered":"2026-03-09T21:16:41.707217+0000","last_clean":"2026-03-09T21:16:41.707217+0000","last_became_active":"2026-03-09T21:16:12.241568+0000","last_became_peered":"2026-03-09T21:16:12.241568+0000","last_unstale":"2026-03-09T21:16:41.707217+0000","last_undegraded":"2026-03-09T21:16:41.707217+0000","last_fullsized":"2026-03-09T21:16:41.707217+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:43:13.793074+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.707157+0000","last_change":"2026-03-09T21:16:10.209692+0000","last_active":"2026-03-09T21:16:41.707157+0000","last_peered":"2026-03-09T21:16:41.707157+0000","last_clean":"2026-03-09T21:16:41.707157+0000","last_became_active":"2026-03-09T21:16:10.209312+0000","last_became_peered":"2026-03-09T21:16:10.209312+0000","last_unstale":"2026-03-09T21:16:41.707157+0000","last_undegraded":"2026-03-09T21:16:41.707157+0000","last_fullsized":"2026-03-09T21:16:41.707157+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16
:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:17:44.302980+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226079+0000","last_change":"2026-03-09T21:16:14.277807+0000","last_active":"2026-03-09T21:16:42.226079+0000","last_peered":"2026-03-09T21:16:42.226079+0000","last_clean":"2026-03-09T21:16:42.226079+0000","last_became_active":"2026-03-09T21:16:14.277483+0000","last_became_peered":"2026-03-09T21:16:14.277483+00
00","last_unstale":"2026-03-09T21:16:42.226079+0000","last_undegraded":"2026-03-09T21:16:42.226079+0000","last_fullsized":"2026-03-09T21:16:42.226079+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T05:39:34.949383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6
.1e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698055+0000","last_change":"2026-03-09T21:16:17.156266+0000","last_active":"2026-03-09T21:16:41.698055+0000","last_peered":"2026-03-09T21:16:41.698055+0000","last_clean":"2026-03-09T21:16:41.698055+0000","last_became_active":"2026-03-09T21:16:17.156114+0000","last_became_peered":"2026-03-09T21:16:17.156114+0000","last_unstale":"2026-03-09T21:16:41.698055+0000","last_undegraded":"2026-03-09T21:16:41.698055+0000","last_fullsized":"2026-03-09T21:16:41.698055+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:53:58.467754+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.213694+0000","last_change":"2026-03-09T21:16:10.225538+0000","last_active":"2026-03-09T21:16:42.213694+0000","last_peered":"2026-03-09T21:16:42.213694+0000","last_clean":"2026-03-09T21:16:42.213694+0000","last_became_active":"2026-03-09T21:16:10.224946+0000","last_became_peered":"2026-03-09T21:16:10.224946+0000","last_unstale":"2026-03-09T21:16:42.213694+0000","last_undegraded":"2026-03-09T21:16:42.213694+0000","last_fullsized":"2026-03-09T21:16:42.213694+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148
205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:57:58.282907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"62'5","reported_seq":39,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:42.226536+0000","last_change":"2026-03-09T21:16:12.245556+0000","last_active":"2026-03-09T21:16:42.226536+0000","last_peered":"2026-03-09T21:16:42.226536+0000","last_clean":"2026-03-09T21:16:42.226536+0000","last_became_active":"2026-03-09T21:16:12.245445+0000","last_became_peered":"2026-03-09T21:16:12.245445+0000","la
st_unstale":"2026-03-09T21:16:42.226536+0000","last_undegraded":"2026-03-09T21:16:42.226536+0000","last_fullsized":"2026-03-09T21:16:42.226536+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:11:44.253286+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d"
,"version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.696495+0000","last_change":"2026-03-09T21:16:14.268395+0000","last_active":"2026-03-09T21:16:41.696495+0000","last_peered":"2026-03-09T21:16:41.696495+0000","last_clean":"2026-03-09T21:16:41.696495+0000","last_became_active":"2026-03-09T21:16:14.267468+0000","last_became_peered":"2026-03-09T21:16:14.267468+0000","last_unstale":"2026-03-09T21:16:41.696495+0000","last_undegraded":"2026-03-09T21:16:41.696495+0000","last_fullsized":"2026-03-09T21:16:41.696495+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:45:12.224442+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702122+0000","last_change":"2026-03-09T21:16:17.158977+0000","last_active":"2026-03-09T21:16:41.702122+0000","last_peered":"2026-03-09T21:16:41.702122+0000","last_clean":"2026-03-09T21:16:41.702122+0000","last_became_active":"2026-03-09T21:16:17.158586+0000","last_became_peered":"2026-03-09T21:16:17.158586+0000","last_unstale":"2026-03-09T21:16:41.702122+0000","last_undegraded":"2026-03-09T21:16:41.702122+0000","last_fullsized":"2026-03-09T21:16:41.702122+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:15.234556+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:15.234
556+0000","last_clean_scrub_stamp":"2026-03-09T21:16:15.234556+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:42:36.578316+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.702748+0000","last_change":"2026-03-09T21:16:10.224856+0000","last_active":"2026-03-09T21:16:41.702748+0000","last_peered":"2026-03-09T21:16:41.702748+0000","last_clean":"2026-03-09T21:16:41.702748+0000","last_became_active":"2026-03-09T21:16:10.224702+0000","last_became_peered":"2026-03-09T21:16:10.224702+0000","la
st_unstale":"2026-03-09T21:16:41.702748+0000","last_undegraded":"2026-03-09T21:16:41.702748+0000","last_fullsized":"2026-03-09T21:16:41.702748+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:09.148205+0000","last_clean_scrub_stamp":"2026-03-09T21:16:09.148205+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:39:29.706215+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a
","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698499+0000","last_change":"2026-03-09T21:16:12.234692+0000","last_active":"2026-03-09T21:16:41.698499+0000","last_peered":"2026-03-09T21:16:41.698499+0000","last_clean":"2026-03-09T21:16:41.698499+0000","last_became_active":"2026-03-09T21:16:12.234554+0000","last_became_peered":"2026-03-09T21:16:12.234554+0000","last_unstale":"2026-03-09T21:16:41.698499+0000","last_undegraded":"2026-03-09T21:16:41.698499+0000","last_fullsized":"2026-03-09T21:16:41.698499+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:11.186072+0000","last_clean_scrub_stamp":"2026-03-09T21:16:11.186072+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:04:47.560911+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-09T21:16:41.698514+0000","last_change":"2026-03-09T21:16:14.265930+0000","last_active":"2026-03-09T21:16:41.698514+0000","last_peered":"2026-03-09T21:16:41.698514+0000","last_clean":"2026-03-09T21:16:41.698514+0000","last_became_active":"2026-03-09T21:16:14.265782+0000","last_became_peered":"2026-03-09T21:16:14.265782+0000","last_unstale":"2026-03-09T21:16:41.698514+0000","last_undegraded":"2026-03-09T21:16:41.698514+0000","last_fullsized":"2026-03-09T21:16:41.698514+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T21:16:13.210067+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T21:16:
13.210067+0000","last_clean_scrub_stamp":"2026-03-09T21:16:13.210067+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:38:08.395343+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":72,"num_read_kb":67,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapse
ts":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub
_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":51,"seq":219043332118,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27960,"kb_used_data":1124,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939464,"statfs":{"total":21470642176,"available":21442011136,"internally_reserved":0,"allocated":1150976,"data_stored":716500,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561054,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27924,"kb_used_data":1092,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939500,"statfs":{"total":21470642176,"available":21442048000,"internally_reserved":0,"allocated":1118208,"data_stored":714722,"data_compressed":0,"data_compressed_allocat
ed":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":36,"seq":154618822693,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27488,"kb_used_data":648,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939936,"statfs":{"total":21470642176,"available":21442494464,"internally_reserved":0,"allocated":663552,"data_stored":255300,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":30,"seq":128849018925,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27516,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939908,"statfs":{"total":21470642176,"available":21442465792,"internally_reserved":0,"allocated":692224,"data_stored":255394,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182451,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27488,"kb_used_data":648,"kb_used_omap":
1,"kb_used_meta":26814,"kb_avail":20939936,"statfs":{"total":21470642176,"available":21442494464,"internally_reserved":0,"allocated":663552,"data_stored":256278,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411387,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":254942,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574913,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":667648,"data_stored":254780,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_laten
cy_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738440,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27944,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939480,"statfs":{"total":21470642176,"available":21442027520,"internally_reserved":0,"allocated":1134592,"data_stored":714397,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_sto
red":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0
,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0
,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserve
d":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"alloc
ated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T21:17:35.274 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T21:17:35.274 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-09T21:17:35.274 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T21:17:35.274 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph health --format=json 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:35 vm07 bash[28052]: audit 2026-03-09T21:17:35.197316+0000 mgr.y (mgr.24416) 67 : audit [DBG] from='client.24542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:35 vm07 bash[28052]: audit 2026-03-09T21:17:35.197316+0000 mgr.y (mgr.24416) 67 : audit [DBG] from='client.24542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:35 vm07 bash[28052]: cluster 2026-03-09T21:17:35.732010+0000 mgr.y (mgr.24416) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:35 vm07 bash[28052]: cluster 2026-03-09T21:17:35.732010+0000 mgr.y (mgr.24416) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:35 vm07 bash[20771]: audit 2026-03-09T21:17:35.197316+0000 mgr.y (mgr.24416) 67 : audit [DBG] from='client.24542 -' 
entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:35 vm07 bash[20771]: audit 2026-03-09T21:17:35.197316+0000 mgr.y (mgr.24416) 67 : audit [DBG] from='client.24542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:35 vm07 bash[20771]: cluster 2026-03-09T21:17:35.732010+0000 mgr.y (mgr.24416) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:35 vm07 bash[20771]: cluster 2026-03-09T21:17:35.732010+0000 mgr.y (mgr.24416) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:36.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:35 vm10 bash[23387]: audit 2026-03-09T21:17:35.197316+0000 mgr.y (mgr.24416) 67 : audit [DBG] from='client.24542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:36.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:35 vm10 bash[23387]: audit 2026-03-09T21:17:35.197316+0000 mgr.y (mgr.24416) 67 : audit [DBG] from='client.24542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T21:17:36.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:35 vm10 bash[23387]: cluster 2026-03-09T21:17:35.732010+0000 mgr.y (mgr.24416) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:36.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:35 vm10 bash[23387]: cluster 2026-03-09T21:17:35.732010+0000 mgr.y 
(mgr.24416) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:36.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:36 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:17:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:36 vm07 bash[20771]: audit 2026-03-09T21:17:36.210922+0000 mgr.y (mgr.24416) 69 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:36 vm07 bash[20771]: audit 2026-03-09T21:17:36.210922+0000 mgr.y (mgr.24416) 69 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:36 vm07 bash[28052]: audit 2026-03-09T21:17:36.210922+0000 mgr.y (mgr.24416) 69 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:36 vm07 bash[28052]: audit 2026-03-09T21:17:36.210922+0000 mgr.y (mgr.24416) 69 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:36 vm10 bash[23387]: audit 2026-03-09T21:17:36.210922+0000 mgr.y (mgr.24416) 69 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:17:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:36 vm10 bash[23387]: audit 2026-03-09T21:17:36.210922+0000 mgr.y (mgr.24416) 69 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T21:17:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:37 vm07 bash[20771]: cluster 2026-03-09T21:17:37.732362+0000 mgr.y (mgr.24416) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:37 vm07 bash[20771]: cluster 2026-03-09T21:17:37.732362+0000 mgr.y (mgr.24416) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:37 vm07 bash[28052]: cluster 2026-03-09T21:17:37.732362+0000 mgr.y (mgr.24416) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:37 vm07 bash[28052]: cluster 2026-03-09T21:17:37.732362+0000 mgr.y (mgr.24416) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:37 vm10 bash[23387]: cluster 2026-03-09T21:17:37.732362+0000 mgr.y (mgr.24416) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:37 vm10 bash[23387]: cluster 2026-03-09T21:17:37.732362+0000 mgr.y (mgr.24416) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:39.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:17:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:17:39.966 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config 
/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config 2026-03-09T21:17:40.259 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:17:40.259 INFO:teuthology.orchestra.run.vm07.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T21:17:40.475 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T21:17:40.475 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T21:17:40.475 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T21:17:40.479 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T21:17:40.480 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-09T21:17:40.480 DEBUG:teuthology.orchestra.run.vm07:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T21:17:40.483 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T21:17:40.483 INFO:teuthology.orchestra.run.vm07.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T21:17:40.484 DEBUG:teuthology.orchestra.run.vm07:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T21:17:40.527 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T21:17:40.589 DEBUG:teuthology.orchestra.run.vm07:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T21:17:40.592 INFO:tasks.workunit:timeout=1h 2026-03-09T21:17:40.592 INFO:tasks.workunit:cleanup=True 2026-03-09T21:17:40.592 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T21:17:40.638 INFO:tasks.workunit.client.0.vm07.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 
2026-03-09T21:17:40.796 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:40 vm07 bash[20771]: cluster 2026-03-09T21:17:39.732828+0000 mgr.y (mgr.24416) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:40.796 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:40 vm07 bash[20771]: cluster 2026-03-09T21:17:39.732828+0000 mgr.y (mgr.24416) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:40 vm07 bash[28052]: cluster 2026-03-09T21:17:39.732828+0000 mgr.y (mgr.24416) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:40 vm07 bash[28052]: cluster 2026-03-09T21:17:39.732828+0000 mgr.y (mgr.24416) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:40 vm07 bash[28052]: audit 2026-03-09T21:17:40.259192+0000 mon.b (mon.1) 39 : audit [DBG] from='client.? 192.168.123.107:0/3261977976' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T21:17:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:40 vm07 bash[28052]: audit 2026-03-09T21:17:40.259192+0000 mon.b (mon.1) 39 : audit [DBG] from='client.? 192.168.123.107:0/3261977976' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T21:17:41.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:40 vm07 bash[20771]: audit 2026-03-09T21:17:40.259192+0000 mon.b (mon.1) 39 : audit [DBG] from='client.? 
192.168.123.107:0/3261977976' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T21:17:41.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:40 vm07 bash[20771]: audit 2026-03-09T21:17:40.259192+0000 mon.b (mon.1) 39 : audit [DBG] from='client.? 192.168.123.107:0/3261977976' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T21:17:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:40 vm10 bash[23387]: cluster 2026-03-09T21:17:39.732828+0000 mgr.y (mgr.24416) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:40 vm10 bash[23387]: cluster 2026-03-09T21:17:39.732828+0000 mgr.y (mgr.24416) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:17:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:40 vm10 bash[23387]: audit 2026-03-09T21:17:40.259192+0000 mon.b (mon.1) 39 : audit [DBG] from='client.? 192.168.123.107:0/3261977976' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T21:17:41.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:40 vm10 bash[23387]: audit 2026-03-09T21:17:40.259192+0000 mon.b (mon.1) 39 : audit [DBG] from='client.? 
192.168.123.107:0/3261977976' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:41 vm07 bash[20771]: audit 2026-03-09T21:17:41.739178+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:41 vm07 bash[20771]: audit 2026-03-09T21:17:41.739178+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:41 vm07 bash[20771]: audit 2026-03-09T21:17:41.739541+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:41 vm07 bash[20771]: audit 2026-03-09T21:17:41.739541+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:41 vm07 bash[28052]: audit 2026-03-09T21:17:41.739178+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:41 vm07 bash[28052]: audit 2026-03-09T21:17:41.739178+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 
2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:41 vm07 bash[28052]: audit 2026-03-09T21:17:41.739541+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:41 vm07 bash[28052]: audit 2026-03-09T21:17:41.739541+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:41 vm10 bash[23387]: audit 2026-03-09T21:17:41.739178+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:41 vm10 bash[23387]: audit 2026-03-09T21:17:41.739178+0000 mon.c (mon.2) 60 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:41 vm10 bash[23387]: audit 2026-03-09T21:17:41.739541+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:42.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:41 vm10 bash[23387]: audit 2026-03-09T21:17:41.739541+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]: dispatch 2026-03-09T21:17:43.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: cluster 2026-03-09T21:17:41.733119+0000 mgr.y (mgr.24416) 72 : 
cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:43.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: cluster 2026-03-09T21:17:41.733119+0000 mgr.y (mgr.24416) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:17:43.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: audit 2026-03-09T21:17:41.806918+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]': finished 2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: audit 2026-03-09T21:17:41.806918+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]': finished 2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: cluster 2026-03-09T21:17:41.814220+0000 mon.a (mon.0) 805 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: cluster 2026-03-09T21:17:41.814220+0000 mon.a (mon.0) 805 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: audit 2026-03-09T21:17:41.817239+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:42 vm07 bash[20771]: audit 2026-03-09T21:17:41.817239+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:42 vm07 bash[28052]: cluster 2026-03-09T21:17:41.733119+0000 mgr.y (mgr.24416) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:42 vm07 bash[28052]: audit 2026-03-09T21:17:41.806918+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]': finished
2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:42 vm07 bash[28052]: cluster 2026-03-09T21:17:41.814220+0000 mon.a (mon.0) 805 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-09T21:17:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:42 vm07 bash[28052]: audit 2026-03-09T21:17:41.817239+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:17:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:42 vm10 bash[23387]: cluster 2026-03-09T21:17:41.733119+0000 mgr.y (mgr.24416) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:42 vm10 bash[23387]: audit 2026-03-09T21:17:41.806918+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24416 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 2]}]': finished
2026-03-09T21:17:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:42 vm10 bash[23387]: cluster 2026-03-09T21:17:41.814220+0000 mon.a (mon.0) 805 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-09T21:17:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:42 vm10 bash[23387]: audit 2026-03-09T21:17:41.817239+0000 mon.c (mon.2) 61 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:17:44.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:43 vm07 bash[20771]: cluster 2026-03-09T21:17:42.830799+0000 mon.a (mon.0) 806 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-09T21:17:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:43 vm07 bash[28052]: cluster 2026-03-09T21:17:42.830799+0000 mon.a (mon.0) 806 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-09T21:17:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:43 vm10 bash[23387]: cluster 2026-03-09T21:17:42.830799+0000 mon.a (mon.0) 806 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-09T21:17:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:44 vm07 bash[20771]: cluster 2026-03-09T21:17:43.733501+0000 mgr.y (mgr.24416) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:44 vm07 bash[20771]: cluster 2026-03-09T21:17:43.827876+0000 mon.a (mon.0) 807 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-09T21:17:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:44 vm07 bash[28052]: cluster 2026-03-09T21:17:43.733501+0000 mgr.y (mgr.24416) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:44 vm07 bash[28052]: cluster 2026-03-09T21:17:43.827876+0000 mon.a (mon.0) 807 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-09T21:17:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:44 vm10 bash[23387]: cluster 2026-03-09T21:17:43.733501+0000 mgr.y (mgr.24416) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:44 vm10 bash[23387]: cluster 2026-03-09T21:17:43.827876+0000 mon.a (mon.0) 807 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-09T21:17:45.942 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:17:45 vm10 bash[51199]: logger=infra.usagestats t=2026-03-09T21:17:45.503135041Z level=info msg="Usage stats are ready to report"
2026-03-09T21:17:46.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:46 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:17:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:46 vm10 bash[23387]: cluster 2026-03-09T21:17:45.733831+0000 mgr.y (mgr.24416) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:47.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:46 vm07 bash[20771]: cluster 2026-03-09T21:17:45.733831+0000 mgr.y (mgr.24416) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:46 vm07 bash[28052]: cluster 2026-03-09T21:17:45.733831+0000 mgr.y (mgr.24416) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:47 vm10 bash[23387]: audit 2026-03-09T21:17:46.221911+0000 mgr.y (mgr.24416) 75 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:17:48.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:47 vm07 bash[20771]: audit 2026-03-09T21:17:46.221911+0000 mgr.y (mgr.24416) 75 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:17:48.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:47 vm07 bash[28052]: audit 2026-03-09T21:17:46.221911+0000 mgr.y (mgr.24416) 75 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:48 vm07 bash[20771]: cluster 2026-03-09T21:17:47.734239+0000 mgr.y (mgr.24416) 76 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:48 vm07 bash[20771]: cluster 2026-03-09T21:17:47.878344+0000 mon.a (mon.0) 808 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:48 vm07 bash[20771]: cluster 2026-03-09T21:17:47.878376+0000 mon.a (mon.0) 809 : cluster [INF] Cluster is now healthy
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:17:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:48 vm07 bash[28052]: cluster 2026-03-09T21:17:47.734239+0000 mgr.y (mgr.24416) 76 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:48 vm07 bash[28052]: cluster 2026-03-09T21:17:47.878344+0000 mon.a (mon.0) 808 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-09T21:17:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:48 vm07 bash[28052]: cluster 2026-03-09T21:17:47.878376+0000 mon.a (mon.0) 809 : cluster [INF] Cluster is now healthy
2026-03-09T21:17:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:48 vm10 bash[23387]: cluster 2026-03-09T21:17:47.734239+0000 mgr.y (mgr.24416) 76 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-09T21:17:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:48 vm10 bash[23387]: cluster 2026-03-09T21:17:47.878344+0000 mon.a (mon.0) 808 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-09T21:17:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:48 vm10 bash[23387]: cluster 2026-03-09T21:17:47.878376+0000 mon.a (mon.0) 809 : cluster [INF] Cluster is now healthy
2026-03-09T21:17:50.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:50 vm10 bash[23387]: cluster 2026-03-09T21:17:49.734697+0000 mgr.y (mgr.24416) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:50.614 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:50 vm07 bash[20771]: cluster 2026-03-09T21:17:49.734697+0000 mgr.y (mgr.24416) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:50 vm07 bash[28052]: cluster 2026-03-09T21:17:49.734697+0000 mgr.y (mgr.24416) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:17:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:52 vm07 bash[20771]: cluster 2026-03-09T21:17:51.735050+0000 mgr.y (mgr.24416) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:17:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:52 vm07 bash[28052]: cluster 2026-03-09T21:17:51.735050+0000 mgr.y (mgr.24416) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:17:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:52 vm10 bash[23387]: cluster 2026-03-09T21:17:51.735050+0000 mgr.y (mgr.24416) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:17:55.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:54 vm07 bash[20771]: cluster 2026-03-09T21:17:53.735645+0000 mgr.y (mgr.24416) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s
2026-03-09T21:17:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:54 vm07 bash[28052]: cluster 2026-03-09T21:17:53.735645+0000 mgr.y (mgr.24416) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s
2026-03-09T21:17:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:54 vm10 bash[23387]: cluster 2026-03-09T21:17:53.735645+0000 mgr.y (mgr.24416) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s
2026-03-09T21:17:56.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:17:56 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:17:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:56 vm07 bash[20771]: cluster 2026-03-09T21:17:55.736024+0000 mgr.y (mgr.24416) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:56 vm07 bash[28052]: cluster 2026-03-09T21:17:55.736024+0000 mgr.y (mgr.24416) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:56 vm10 bash[23387]: cluster 2026-03-09T21:17:55.736024+0000 mgr.y (mgr.24416) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:57 vm07 bash[20771]: audit 2026-03-09T21:17:56.232259+0000 mgr.y (mgr.24416) 81 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:17:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:57 vm07 bash[20771]: audit 2026-03-09T21:17:56.825086+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:17:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:57 vm07 bash[28052]: audit 2026-03-09T21:17:56.232259+0000 mgr.y (mgr.24416) 81 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:17:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:57 vm07 bash[28052]: audit 2026-03-09T21:17:56.825086+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:17:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:57 vm10 bash[23387]: audit 2026-03-09T21:17:56.232259+0000 mgr.y (mgr.24416) 81 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:17:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:57 vm10 bash[23387]: audit 2026-03-09T21:17:56.825086+0000 mon.c (mon.2) 62 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:17:58.930 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:17:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:17:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:17:58.930 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:58 vm07 bash[20771]: cluster 2026-03-09T21:17:57.736514+0000 mgr.y (mgr.24416) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:59 vm07 bash[28052]: cluster 2026-03-09T21:17:57.736514+0000 mgr.y (mgr.24416) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:17:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:59 vm10 bash[23387]: cluster 2026-03-09T21:17:57.736514+0000 mgr.y (mgr.24416) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:00.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:17:59 vm07 bash[20771]: cluster 2026-03-09T21:17:59.737003+0000 mgr.y (mgr.24416) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:17:59 vm07 bash[28052]: cluster 2026-03-09T21:17:59.737003+0000 mgr.y (mgr.24416) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:17:59 vm10 bash[23387]: cluster 2026-03-09T21:17:59.737003+0000 mgr.y (mgr.24416) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:02 vm07 bash[20771]: cluster 2026-03-09T21:18:01.737306+0000 mgr.y (mgr.24416) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:02 vm07 bash[28052]: cluster 2026-03-09T21:18:01.737306+0000 mgr.y (mgr.24416) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:03.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:02 vm10 bash[23387]: cluster 2026-03-09T21:18:01.737306+0000 mgr.y (mgr.24416) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:05.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:04 vm07 bash[20771]: cluster 2026-03-09T21:18:03.737787+0000 mgr.y (mgr.24416) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:05.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:04 vm07 bash[28052]: cluster 2026-03-09T21:18:03.737787+0000 mgr.y (mgr.24416) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:05.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:04 vm10 bash[23387]: cluster 2026-03-09T21:18:03.737787+0000 mgr.y (mgr.24416) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:06.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:18:06 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:18:07.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:06 vm07 bash[20771]: cluster 2026-03-09T21:18:05.738082+0000 mgr.y (mgr.24416) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:07.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:06 vm07 bash[28052]: cluster 2026-03-09T21:18:05.738082+0000 mgr.y (mgr.24416) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:07.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:06 vm10 bash[23387]: cluster 2026-03-09T21:18:05.738082+0000 mgr.y (mgr.24416) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:07 vm07 bash[20771]: audit 2026-03-09T21:18:06.235501+0000 mgr.y (mgr.24416) 87 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:07 vm07 bash[28052]: audit 2026-03-09T21:18:06.235501+0000 mgr.y (mgr.24416) 87 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:07 vm10 bash[23387]: audit 2026-03-09T21:18:06.235501+0000 mgr.y (mgr.24416) 87 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:08.967 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:18:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:18:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:18:08.968 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:08 vm07 bash[20771]: cluster 2026-03-09T21:18:07.738486+0000 mgr.y (mgr.24416) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:08 vm07 bash[28052]: cluster 2026-03-09T21:18:07.738486+0000 mgr.y (mgr.24416) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:08 vm10 bash[23387]: cluster 2026-03-09T21:18:07.738486+0000 mgr.y (mgr.24416) 88 : cluster [DBG] pgmap v48: 132 pgs:
132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:08 vm10 bash[23387]: cluster 2026-03-09T21:18:07.738486+0000 mgr.y (mgr.24416) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:09 vm07 bash[20771]: cluster 2026-03-09T21:18:09.738955+0000 mgr.y (mgr.24416) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:09 vm07 bash[20771]: cluster 2026-03-09T21:18:09.738955+0000 mgr.y (mgr.24416) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:09 vm07 bash[28052]: cluster 2026-03-09T21:18:09.738955+0000 mgr.y (mgr.24416) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:09 vm07 bash[28052]: cluster 2026-03-09T21:18:09.738955+0000 mgr.y (mgr.24416) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:09 vm10 bash[23387]: cluster 2026-03-09T21:18:09.738955+0000 mgr.y (mgr.24416) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:09 vm10 bash[23387]: cluster 2026-03-09T21:18:09.738955+0000 mgr.y (mgr.24416) 89 : cluster [DBG] pgmap v49: 132 pgs: 
132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:12 vm10 bash[23387]: cluster 2026-03-09T21:18:11.739286+0000 mgr.y (mgr.24416) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:12 vm10 bash[23387]: cluster 2026-03-09T21:18:11.739286+0000 mgr.y (mgr.24416) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:12 vm10 bash[23387]: audit 2026-03-09T21:18:11.889835+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:13.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:12 vm10 bash[23387]: audit 2026-03-09T21:18:11.889835+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:12 vm07 bash[20771]: cluster 2026-03-09T21:18:11.739286+0000 mgr.y (mgr.24416) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:12 vm07 bash[20771]: cluster 2026-03-09T21:18:11.739286+0000 mgr.y (mgr.24416) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:12 vm07 bash[20771]: audit 2026-03-09T21:18:11.889835+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24416 
192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:12 vm07 bash[20771]: audit 2026-03-09T21:18:11.889835+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:12 vm07 bash[28052]: cluster 2026-03-09T21:18:11.739286+0000 mgr.y (mgr.24416) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:12 vm07 bash[28052]: cluster 2026-03-09T21:18:11.739286+0000 mgr.y (mgr.24416) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:12 vm07 bash[28052]: audit 2026-03-09T21:18:11.889835+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:12 vm07 bash[28052]: audit 2026-03-09T21:18:11.889835+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:15 vm07 bash[20771]: cluster 2026-03-09T21:18:13.739809+0000 mgr.y (mgr.24416) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:15 vm07 bash[20771]: cluster 2026-03-09T21:18:13.739809+0000 mgr.y (mgr.24416) 91 : 
cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:15 vm07 bash[28052]: cluster 2026-03-09T21:18:13.739809+0000 mgr.y (mgr.24416) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:15 vm07 bash[28052]: cluster 2026-03-09T21:18:13.739809+0000 mgr.y (mgr.24416) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:15 vm10 bash[23387]: cluster 2026-03-09T21:18:13.739809+0000 mgr.y (mgr.24416) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:15 vm10 bash[23387]: cluster 2026-03-09T21:18:13.739809+0000 mgr.y (mgr.24416) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:16.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:16 vm07 bash[20771]: cluster 2026-03-09T21:18:15.740180+0000 mgr.y (mgr.24416) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:16.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:16 vm07 bash[20771]: cluster 2026-03-09T21:18:15.740180+0000 mgr.y (mgr.24416) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:16.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:16 vm07 bash[28052]: cluster 2026-03-09T21:18:15.740180+0000 mgr.y (mgr.24416) 92 : 
cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:16.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:16 vm07 bash[28052]: cluster 2026-03-09T21:18:15.740180+0000 mgr.y (mgr.24416) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:16.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:16 vm10 bash[23387]: cluster 2026-03-09T21:18:15.740180+0000 mgr.y (mgr.24416) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:16.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:16 vm10 bash[23387]: cluster 2026-03-09T21:18:15.740180+0000 mgr.y (mgr.24416) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T21:18:16.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:18:16 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:18:17.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:17 vm07 bash[20771]: audit 2026-03-09T21:18:16.242029+0000 mgr.y (mgr.24416) 93 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:17.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:17 vm07 bash[20771]: audit 2026-03-09T21:18:16.242029+0000 mgr.y (mgr.24416) 93 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:17.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:17 vm07 bash[28052]: audit 2026-03-09T21:18:16.242029+0000 mgr.y (mgr.24416) 93 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T21:18:17.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:17 vm07 bash[28052]: audit 2026-03-09T21:18:16.242029+0000 mgr.y (mgr.24416) 93 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:17.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:17 vm10 bash[23387]: audit 2026-03-09T21:18:16.242029+0000 mgr.y (mgr.24416) 93 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:17.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:17 vm10 bash[23387]: audit 2026-03-09T21:18:16.242029+0000 mgr.y (mgr.24416) 93 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:18 vm07 bash[20771]: audit 2026-03-09T21:18:17.561647+0000 mon.c (mon.2) 64 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:18 vm07 bash[20771]: audit 2026-03-09T21:18:17.561647+0000 mon.c (mon.2) 64 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:18 vm07 bash[20771]: cluster 2026-03-09T21:18:17.740614+0000 mgr.y (mgr.24416) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:18 vm07 bash[20771]: cluster 2026-03-09T21:18:17.740614+0000 mgr.y (mgr.24416) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:18 vm07 bash[28052]: audit 2026-03-09T21:18:17.561647+0000 mon.c (mon.2) 64 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:18 vm07 bash[28052]: audit 2026-03-09T21:18:17.561647+0000 mon.c (mon.2) 64 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:18 vm07 bash[28052]: cluster 2026-03-09T21:18:17.740614+0000 mgr.y (mgr.24416) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:18 vm07 bash[28052]: cluster 2026-03-09T21:18:17.740614+0000 mgr.y (mgr.24416) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:18 vm10 bash[23387]: audit 2026-03-09T21:18:17.561647+0000 mon.c (mon.2) 64 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:18:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:18 vm10 bash[23387]: audit 2026-03-09T21:18:17.561647+0000 mon.c (mon.2) 64 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:18:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:18 vm10 bash[23387]: cluster 2026-03-09T21:18:17.740614+0000 mgr.y (mgr.24416) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:18 vm10 bash[23387]: cluster 2026-03-09T21:18:17.740614+0000 mgr.y (mgr.24416) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:19.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:18:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:18:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:18:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:21 vm10 bash[23387]: cluster 2026-03-09T21:18:19.741051+0000 mgr.y (mgr.24416) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:21 vm10 bash[23387]: cluster 2026-03-09T21:18:19.741051+0000 mgr.y (mgr.24416) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:21 vm07 bash[20771]: cluster 2026-03-09T21:18:19.741051+0000 mgr.y (mgr.24416) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:21 vm07 bash[20771]: cluster 2026-03-09T21:18:19.741051+0000 mgr.y (mgr.24416) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:21 vm07 bash[28052]: cluster 2026-03-09T21:18:19.741051+0000 mgr.y (mgr.24416) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:21 vm07 bash[28052]: cluster 2026-03-09T21:18:19.741051+0000 mgr.y (mgr.24416) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:22 vm07 bash[20771]: cluster 2026-03-09T21:18:21.741485+0000 mgr.y (mgr.24416) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:22 vm07 bash[20771]: cluster 2026-03-09T21:18:21.741485+0000 mgr.y (mgr.24416) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:22 vm07 bash[28052]: cluster 2026-03-09T21:18:21.741485+0000 mgr.y (mgr.24416) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:22 vm07 bash[28052]: cluster 2026-03-09T21:18:21.741485+0000 mgr.y (mgr.24416) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:22.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:22 vm10 bash[23387]: cluster 2026-03-09T21:18:21.741485+0000 mgr.y (mgr.24416) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:22.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:22 vm10 bash[23387]: cluster 2026-03-09T21:18:21.741485+0000 mgr.y (mgr.24416) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:state without impacting any branches by switching back to a branch.
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr: git switch -c
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.736 INFO:tasks.workunit.client.0.vm07.stderr:Or undo this operation with:
2026-03-09T21:18:22.737 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.737 INFO:tasks.workunit.client.0.vm07.stderr: git switch -
2026-03-09T21:18:22.737 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.737 INFO:tasks.workunit.client.0.vm07.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-09T21:18:22.737 INFO:tasks.workunit.client.0.vm07.stderr:
2026-03-09T21:18:22.737 INFO:tasks.workunit.client.0.vm07.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
2026-03-09T21:18:22.747 DEBUG:teuthology.orchestra.run.vm07:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-09T21:18:22.793 INFO:tasks.workunit.client.0.vm07.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2026-03-09T21:18:22.802 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-09T21:18:22.802 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2026-03-09T21:18:22.877 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2026-03-09T21:18:22.922 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
2026-03-09T21:18:22.957 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-09T21:18:22.959 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-09T21:18:22.960 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2026-03-09T21:18:22.993 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-09T21:18:22.996 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T21:18:22.996 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-09T21:18:23.048 INFO:tasks.workunit:Running workunits matching rados/test_python.sh on client.0...
2026-03-09T21:18:23.049 INFO:tasks.workunit:Running workunit rados/test_python.sh...
2026-03-09T21:18:23.049 DEBUG:teuthology.orchestra.run.vm07:workunit test rados/test_python.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh
2026-03-09T21:18:23.099 INFO:tasks.workunit.client.0.vm07.stderr:+ ceph osd pool create rbd
2026-03-09T21:18:24.203 INFO:tasks.workunit.client.0.vm07.stderr:pool 'rbd' already exists
2026-03-09T21:18:24.219 INFO:tasks.workunit.client.0.vm07.stderr:+ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh
2026-03-09T21:18:24.219 INFO:tasks.workunit.client.0.vm07.stderr:+ python3 -m pytest -v /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/../../../src/test/pybind/test_rados.py
2026-03-09T21:18:24.325 INFO:tasks.workunit.client.0.vm07.stdout:============================= test session starts ==============================
2026-03-09T21:18:24.325 INFO:tasks.workunit.client.0.vm07.stdout:platform linux -- Python 3.10.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.0 -- /usr/bin/python3
2026-03-09T21:18:24.325 INFO:tasks.workunit.client.0.vm07.stdout:cachedir: .pytest_cache
2026-03-09T21:18:24.325 INFO:tasks.workunit.client.0.vm07.stdout:rootdir: /home/ubuntu/cephtest/clone.client.0/src/test/pybind, configfile: pytest.ini
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.041380+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.041380+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.053574+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.053574+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.093553+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.093553+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.106094+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.106094+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.314267+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.314267+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.315190+0000 mon.a (mon.0) 814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.315190+0000 mon.a (mon.0) 814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.488859+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.488859+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.490313+0000 mon.c (mon.2) 66 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.490313+0000 mon.c (mon.2) 66 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.513257+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: audit 2026-03-09T21:18:23.513257+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: cluster 2026-03-09T21:18:23.742054+0000 mgr.y (mgr.24416) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:24.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:24 vm07 bash[20771]: cluster 2026-03-09T21:18:23.742054+0000 mgr.y (mgr.24416) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.041380+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.041380+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.053574+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.053574+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.093553+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.093553+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.106094+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.106094+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.314267+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.314267+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.315190+0000 mon.a (mon.0) 814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.315190+0000 mon.a (mon.0) 814 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.488859+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.488859+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.490313+0000 mon.c (mon.2) 66 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.490313+0000 mon.c (mon.2) 66 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.513257+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: audit 2026-03-09T21:18:23.513257+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:24 vm07 bash[28052]: cluster 2026-03-09T21:18:23.742054+0000 mgr.y (mgr.24416) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:24.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:18:24 vm07 bash[28052]: cluster 2026-03-09T21:18:23.742054+0000 mgr.y (mgr.24416) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.041380+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.041380+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.053574+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.053574+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.093553+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.093553+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.106094+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.106094+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.314267+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 
192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.314267+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.315190+0000 mon.a (mon.0) 814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.315190+0000 mon.a (mon.0) 814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.488859+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.488859+0000 mon.c (mon.2) 65 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.490313+0000 mon.c (mon.2) 66 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:18:24.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.490313+0000 mon.c (mon.2) 66 : audit [INF] from='mgr.24416 
192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:18:24.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.513257+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: audit 2026-03-09T21:18:23.513257+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:24.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: cluster 2026-03-09T21:18:23.742054+0000 mgr.y (mgr.24416) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:24.443 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:24 vm10 bash[23387]: cluster 2026-03-09T21:18:23.742054+0000 mgr.y (mgr.24416) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:24.531 INFO:tasks.workunit.client.0.vm07.stdout:collecting ... 
collected 91 items 2026-03-09T21:18:24.531 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:18:24.538 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init_error PASSED [ 1%] 2026-03-09T21:18:24.578 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init PASSED [ 2%] 2026-03-09T21:18:24.590 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_ioctx_context_manager PASSED [ 3%] 2026-03-09T21:18:24.595 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv PASSED [ 4%] 2026-03-09T21:18:24.598 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv_empty_str PASSED [ 5%] 2026-03-09T21:18:24.602 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_configuring PASSED [ 6%] 2026-03-09T21:18:24.612 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_connected PASSED [ 7%] 2026-03-09T21:18:24.623 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_shutdown PASSED [ 8%] 2026-03-09T21:18:24.637 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_ping_monitor PASSED [ 9%] 2026-03-09T21:18:24.649 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_annotations PASSED [ 10%] 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.120594+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.120594+0000 mon.a (mon.0) 816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: cluster 2026-03-09T21:18:24.130756+0000 mon.a (mon.0) 817 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: cluster 2026-03-09T21:18:24.130756+0000 mon.a (mon.0) 817 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.202778+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.202778+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.203671+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.203671+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.633592+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.107:0/762255262' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:18:25.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:25 vm10 bash[23387]: audit 2026-03-09T21:18:24.633592+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.107:0/762255262' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.120594+0000 mon.a (mon.0) 816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.120594+0000 mon.a (mon.0) 816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: cluster 2026-03-09T21:18:24.130756+0000 mon.a (mon.0) 817 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: cluster 2026-03-09T21:18:24.130756+0000 mon.a (mon.0) 817 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.202778+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 
192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.202778+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.203671+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.203671+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.633592+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.107:0/762255262' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:25 vm07 bash[20771]: audit 2026-03-09T21:18:24.633592+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.107:0/762255262' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.120594+0000 mon.a (mon.0) 816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.120594+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: cluster 2026-03-09T21:18:24.130756+0000 mon.a (mon.0) 817 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: cluster 2026-03-09T21:18:24.130756+0000 mon.a (mon.0) 817 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.202778+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.202778+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.107:0/4177182562' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.203671+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.203671+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.633592+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 
192.168.123.107:0/762255262' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:18:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:25 vm07 bash[28052]: audit 2026-03-09T21:18:24.633592+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.107:0/762255262' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:18:26.197 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create PASSED [ 12%] 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:26 vm07 bash[20771]: cluster 2026-03-09T21:18:25.134804+0000 mon.a (mon.0) 819 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:26 vm07 bash[20771]: cluster 2026-03-09T21:18:25.134804+0000 mon.a (mon.0) 819 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:26 vm07 bash[20771]: cluster 2026-03-09T21:18:25.742409+0000 mgr.y (mgr.24416) 98 : cluster [DBG] pgmap v59: 196 pgs: 64 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:26 vm07 bash[20771]: cluster 2026-03-09T21:18:25.742409+0000 mgr.y (mgr.24416) 98 : cluster [DBG] pgmap v59: 196 pgs: 64 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:26 vm07 bash[28052]: cluster 2026-03-09T21:18:25.134804+0000 mon.a (mon.0) 819 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:26 vm07 bash[28052]: cluster 2026-03-09T21:18:25.134804+0000 mon.a (mon.0) 819 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T21:18:26.615 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:26 vm07 bash[28052]: cluster 2026-03-09T21:18:25.742409+0000 mgr.y (mgr.24416) 98 : cluster [DBG] pgmap v59: 196 pgs: 64 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:26.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:26 vm07 bash[28052]: cluster 2026-03-09T21:18:25.742409+0000 mgr.y (mgr.24416) 98 : cluster [DBG] pgmap v59: 196 pgs: 64 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:26 vm10 bash[23387]: cluster 2026-03-09T21:18:25.134804+0000 mon.a (mon.0) 819 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T21:18:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:26 vm10 bash[23387]: cluster 2026-03-09T21:18:25.134804+0000 mon.a (mon.0) 819 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T21:18:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:26 vm10 bash[23387]: cluster 2026-03-09T21:18:25.742409+0000 mgr.y (mgr.24416) 98 : cluster [DBG] pgmap v59: 196 pgs: 64 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:26 vm10 bash[23387]: cluster 2026-03-09T21:18:25.742409+0000 mgr.y (mgr.24416) 98 : cluster [DBG] pgmap v59: 196 pgs: 64 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:26.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:18:26 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: cluster 2026-03-09T21:18:26.173467+0000 mon.a (mon.0) 820 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: cluster 2026-03-09T21:18:26.173467+0000 mon.a (mon.0) 820 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: cluster 2026-03-09T21:18:26.189061+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: cluster 2026-03-09T21:18:26.189061+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: audit 2026-03-09T21:18:26.251301+0000 mgr.y (mgr.24416) 99 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: audit 2026-03-09T21:18:26.251301+0000 mgr.y (mgr.24416) 99 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: audit 2026-03-09T21:18:26.916568+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: audit 2026-03-09T21:18:26.916568+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:27 vm07 bash[20771]: audit 2026-03-09T21:18:26.920058+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:18:27 vm07 bash[20771]: audit 2026-03-09T21:18:26.920058+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: cluster 2026-03-09T21:18:26.173467+0000 mon.a (mon.0) 820 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: cluster 2026-03-09T21:18:26.173467+0000 mon.a (mon.0) 820 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: cluster 2026-03-09T21:18:26.189061+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: cluster 2026-03-09T21:18:26.189061+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: audit 2026-03-09T21:18:26.251301+0000 mgr.y (mgr.24416) 99 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: audit 2026-03-09T21:18:26.251301+0000 mgr.y (mgr.24416) 99 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: audit 2026-03-09T21:18:26.916568+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:27.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 
vm07 bash[28052]: audit 2026-03-09T21:18:26.916568+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: audit 2026-03-09T21:18:26.920058+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:27 vm07 bash[28052]: audit 2026-03-09T21:18:26.920058+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: cluster 2026-03-09T21:18:26.173467+0000 mon.a (mon.0) 820 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: cluster 2026-03-09T21:18:26.173467+0000 mon.a (mon.0) 820 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: cluster 2026-03-09T21:18:26.189061+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: cluster 2026-03-09T21:18:26.189061+0000 mon.a (mon.0) 821 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: audit 2026-03-09T21:18:26.251301+0000 mgr.y (mgr.24416) 99 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 
bash[23387]: audit 2026-03-09T21:18:26.251301+0000 mgr.y (mgr.24416) 99 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: audit 2026-03-09T21:18:26.916568+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: audit 2026-03-09T21:18:26.916568+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: audit 2026-03-09T21:18:26.920058+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:27 vm10 bash[23387]: audit 2026-03-09T21:18:26.920058+0000 mon.c (mon.2) 68 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:28.248 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create_utf8 PASSED [ 13%] 2026-03-09T21:18:28.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:28 vm07 bash[20771]: cluster 2026-03-09T21:18:27.254857+0000 mon.a (mon.0) 823 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T21:18:28.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:28 vm07 bash[20771]: cluster 2026-03-09T21:18:27.254857+0000 mon.a (mon.0) 823 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T21:18:28.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:28 vm07 bash[20771]: cluster 2026-03-09T21:18:27.743116+0000 mgr.y (mgr.24416) 100 : cluster [DBG] pgmap v62: 196 pgs: 11 creating+peering, 43 unknown, 142 active+clean; 
455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:18:28.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:28 vm07 bash[20771]: cluster 2026-03-09T21:18:27.743116+0000 mgr.y (mgr.24416) 100 : cluster [DBG] pgmap v62: 196 pgs: 11 creating+peering, 43 unknown, 142 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:18:28.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:28 vm07 bash[28052]: cluster 2026-03-09T21:18:27.254857+0000 mon.a (mon.0) 823 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in
2026-03-09T21:18:29.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:18:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:18:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:18:29.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:29 vm07 bash[20771]: cluster 2026-03-09T21:18:28.258028+0000 mon.a (mon.0) 824 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in
2026-03-09T21:18:30.323 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_pool_lookup_utf8 PASSED [ 14%]
2026-03-09T21:18:30.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:30 vm07 bash[20771]: cluster 2026-03-09T21:18:29.309934+0000 mon.a (mon.0) 825 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in
2026-03-09T21:18:30.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:30 vm07 bash[20771]: cluster 2026-03-09T21:18:29.743549+0000 mgr.y (mgr.24416) 101 : cluster [DBG] pgmap v65: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:30.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:30 vm07 bash[20771]: cluster 2026-03-09T21:18:30.320302+0000 mon.a (mon.0) 826 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in
2026-03-09T21:18:32.524 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_eexist PASSED [ 15%]
2026-03-09T21:18:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:32 vm07 bash[20771]: cluster 2026-03-09T21:18:31.347436+0000 mon.a (mon.0) 827 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-09T21:18:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:32 vm07 bash[20771]: cluster 2026-03-09T21:18:31.743925+0000 mgr.y (mgr.24416) 102 : cluster [DBG] pgmap v68: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:33.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:33 vm07 bash[20771]: cluster 2026-03-09T21:18:32.481748+0000 mon.a (mon.0) 828 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-09T21:18:33.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:33 vm07 bash[20771]: cluster 2026-03-09T21:18:33.458024+0000 mon.a (mon.0) 829 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-09T21:18:34.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:34 vm07 bash[20771]: cluster 2026-03-09T21:18:33.744425+0000 mgr.y (mgr.24416) 103 : cluster [DBG] pgmap v71: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:34.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:34 vm07 bash[20771]: cluster 2026-03-09T21:18:34.476461+0000 mon.a (mon.0) 830 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-09T21:18:36.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:18:36 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:18:36.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:36 vm07 bash[20771]: cluster 2026-03-09T21:18:35.604149+0000 mon.a (mon.0) 831 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-09T21:18:36.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:36 vm07 bash[20771]: cluster 2026-03-09T21:18:35.744786+0000 mgr.y (mgr.24416) 104 : cluster [DBG] pgmap v74: 260 pgs: 96 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:37.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:37 vm07 bash[20771]: audit 2026-03-09T21:18:36.258484+0000 mgr.y (mgr.24416) 105 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:18:37.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:37 vm07 bash[20771]: cluster 2026-03-09T21:18:36.573927+0000 mon.a (mon.0) 832 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:18:37.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:37 vm07 bash[20771]: cluster 2026-03-09T21:18:36.586901+0000 mon.a (mon.0) 833 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-09T21:18:37.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:37 vm07 bash[20771]: cluster 2026-03-09T21:18:37.581529+0000 mon.a (mon.0) 834 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-09T21:18:38.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:38 vm10 bash[23387]: cluster 2026-03-09T21:18:37.745443+0000 mgr.y (mgr.24416) 106 : cluster [DBG] pgmap v77: 196 pgs: 16 unknown, 180 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:18:38.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:38 vm10 bash[23387]: cluster 2026-03-09T21:18:38.584787+0000 mon.a (mon.0) 835 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-09T21:18:39.116 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:18:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:18:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:18:40.649 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_list_pools PASSED [ 16%]
2026-03-09T21:18:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:40 vm10 bash[23387]: cluster 2026-03-09T21:18:39.640031+0000 mon.a (mon.0) 836 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-09T21:18:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:40 vm10 bash[23387]: cluster 2026-03-09T21:18:39.745841+0000 mgr.y (mgr.24416) 107 : cluster [DBG] pgmap v80: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:41 vm07 bash[20771]: cluster 2026-03-09T21:18:40.641232+0000 mon.a (mon.0) 837 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-09T21:18:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:42 vm10 bash[23387]: cluster 2026-03-09T21:18:41.746127+0000 mgr.y (mgr.24416) 108 : cluster [DBG] pgmap v82: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:42 vm10 bash[23387]: cluster 2026-03-09T21:18:41.779638+0000 mon.a (mon.0) 838 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-09T21:18:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:42 vm10 bash[23387]: audit 2026-03-09T21:18:41.926506+0000 mon.c (mon.2) 69 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:43 vm07 bash[20771]: cluster 2026-03-09T21:18:42.906945+0000 mon.a (mon.0) 839 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:43 vm07 bash[20771]: cluster 2026-03-09T21:18:42.959727+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:43 vm07 bash[20771]: audit 2026-03-09T21:18:42.964315+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:43 vm07 bash[20771]: audit 2026-03-09T21:18:42.971227+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:43 vm07 bash[20771]: cluster 2026-03-09T21:18:43.746446+0000 mgr.y (mgr.24416) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: cluster 2026-03-09T21:18:42.959727+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e87: 8 total, 8
up, 8 in 2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: audit 2026-03-09T21:18:42.964315+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: audit 2026-03-09T21:18:42.964315+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: audit 2026-03-09T21:18:42.971227+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: audit 2026-03-09T21:18:42.971227+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: cluster 2026-03-09T21:18:43.746446+0000 mgr.y (mgr.24416) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:43 vm07 bash[28052]: cluster 2026-03-09T21:18:43.746446+0000 mgr.y (mgr.24416) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: cluster 2026-03-09T21:18:42.906945+0000 mon.a (mon.0) 839 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: cluster 2026-03-09T21:18:42.906945+0000 mon.a (mon.0) 839 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: cluster 2026-03-09T21:18:42.959727+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: cluster 2026-03-09T21:18:42.959727+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: audit 2026-03-09T21:18:42.964315+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: audit 2026-03-09T21:18:42.964315+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: audit 2026-03-09T21:18:42.971227+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: audit 2026-03-09T21:18:42.971227+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: cluster 2026-03-09T21:18:43.746446+0000 mgr.y (mgr.24416) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:43 vm10 bash[23387]: cluster 2026-03-09T21:18:43.746446+0000 mgr.y (mgr.24416) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: audit 2026-03-09T21:18:43.945944+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: audit 2026-03-09T21:18:43.945944+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: cluster 2026-03-09T21:18:43.960851+0000 mon.a (mon.0) 843 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: cluster 2026-03-09T21:18:43.960851+0000 mon.a (mon.0) 843 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: audit 2026-03-09T21:18:43.963476+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: audit 2026-03-09T21:18:43.963476+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: audit 2026-03-09T21:18:43.968389+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:44 vm07 bash[28052]: audit 2026-03-09T21:18:43.968389+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: audit 2026-03-09T21:18:43.945944+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: audit 2026-03-09T21:18:43.945944+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: cluster 2026-03-09T21:18:43.960851+0000 mon.a (mon.0) 843 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: cluster 2026-03-09T21:18:43.960851+0000 mon.a (mon.0) 843 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: audit 2026-03-09T21:18:43.963476+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 
192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: audit 2026-03-09T21:18:43.963476+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: audit 2026-03-09T21:18:43.968389+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:44 vm07 bash[20771]: audit 2026-03-09T21:18:43.968389+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: audit 2026-03-09T21:18:43.945944+0000 mon.a (mon.0) 842 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: audit 2026-03-09T21:18:43.945944+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: cluster 2026-03-09T21:18:43.960851+0000 mon.a (mon.0) 843 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: cluster 2026-03-09T21:18:43.960851+0000 mon.a (mon.0) 843 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: audit 2026-03-09T21:18:43.963476+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: audit 2026-03-09T21:18:43.963476+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: audit 2026-03-09T21:18:43.968389+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:44 vm10 bash[23387]: audit 2026-03-09T21:18:43.968389+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:18:46.268 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: audit 2026-03-09T21:18:44.987771+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: audit 2026-03-09T21:18:44.987771+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: cluster 2026-03-09T21:18:44.992764+0000 mon.a (mon.0) 846 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: cluster 2026-03-09T21:18:44.992764+0000 mon.a (mon.0) 846 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: audit 2026-03-09T21:18:44.996732+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: audit 2026-03-09T21:18:44.996732+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 
192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: audit 2026-03-09T21:18:44.997257+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: audit 2026-03-09T21:18:44.997257+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: cluster 2026-03-09T21:18:45.746773+0000 mgr.y (mgr.24416) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:46.269 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:45 vm10 bash[23387]: cluster 2026-03-09T21:18:45.746773+0000 mgr.y (mgr.24416) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: audit 2026-03-09T21:18:44.987771+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: audit 2026-03-09T21:18:44.987771+0000 mon.a (mon.0) 845 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: cluster 2026-03-09T21:18:44.992764+0000 mon.a (mon.0) 846 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: cluster 2026-03-09T21:18:44.992764+0000 mon.a (mon.0) 846 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: audit 2026-03-09T21:18:44.996732+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: audit 2026-03-09T21:18:44.996732+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: audit 2026-03-09T21:18:44.997257+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: audit 2026-03-09T21:18:44.997257+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: cluster 2026-03-09T21:18:45.746773+0000 mgr.y (mgr.24416) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:45 vm07 bash[20771]: cluster 2026-03-09T21:18:45.746773+0000 mgr.y (mgr.24416) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: audit 2026-03-09T21:18:44.987771+0000 mon.a (mon.0) 845 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: audit 2026-03-09T21:18:44.987771+0000 mon.a (mon.0) 845 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: cluster 2026-03-09T21:18:44.992764+0000 mon.a (mon.0) 846 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: cluster 2026-03-09T21:18:44.992764+0000 mon.a (mon.0) 846 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: audit 2026-03-09T21:18:44.996732+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: audit 2026-03-09T21:18:44.996732+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.107:0/1659860165' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: audit 2026-03-09T21:18:44.997257+0000 mon.a (mon.0) 847 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: audit 2026-03-09T21:18:44.997257+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: cluster 2026-03-09T21:18:45.746773+0000 mgr.y (mgr.24416) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:45 vm07 bash[28052]: cluster 2026-03-09T21:18:45.746773+0000 mgr.y (mgr.24416) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:46.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:18:46 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:18:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:47 vm10 bash[23387]: audit 2026-03-09T21:18:46.001523+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T21:18:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:47 vm10 bash[23387]: audit 2026-03-09T21:18:46.001523+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T21:18:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:47 vm10 bash[23387]: cluster 2026-03-09T21:18:46.007175+0000 mon.a (mon.0) 849 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T21:18:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:47 vm10 bash[23387]: cluster 2026-03-09T21:18:46.007175+0000 mon.a (mon.0) 849 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T21:18:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:47 vm10 bash[23387]: audit 2026-03-09T21:18:46.268541+0000 mgr.y (mgr.24416) 111 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:47 vm10 bash[23387]: audit 2026-03-09T21:18:46.268541+0000 mgr.y (mgr.24416) 111 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:47 vm07 bash[20771]: audit 2026-03-09T21:18:46.001523+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:47 vm07 bash[20771]: audit 2026-03-09T21:18:46.001523+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:47 vm07 bash[20771]: cluster 2026-03-09T21:18:46.007175+0000 mon.a (mon.0) 849 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:47 vm07 bash[20771]: cluster 2026-03-09T21:18:46.007175+0000 mon.a (mon.0) 849 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:47 vm07 bash[20771]: audit 2026-03-09T21:18:46.268541+0000 mgr.y (mgr.24416) 111 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:47 vm07 bash[20771]: audit 2026-03-09T21:18:46.268541+0000 mgr.y (mgr.24416) 111 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:47 vm07 bash[28052]: audit 2026-03-09T21:18:46.001523+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:47 vm07 bash[28052]: audit 2026-03-09T21:18:46.001523+0000 mon.a (mon.0) 848 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:47 vm07 bash[28052]: cluster 2026-03-09T21:18:46.007175+0000 mon.a (mon.0) 849 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:47 vm07 bash[28052]: cluster 2026-03-09T21:18:46.007175+0000 mon.a (mon.0) 849 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:47 vm07 bash[28052]: audit 2026-03-09T21:18:46.268541+0000 mgr.y (mgr.24416) 111 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:47 vm07 bash[28052]: audit 2026-03-09T21:18:46.268541+0000 mgr.y (mgr.24416) 111 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:48.140 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_pool_base_tier PASSED [ 17%] 2026-03-09T21:18:48.157 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_fsid PASSED [ 18%] 2026-03-09T21:18:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:48 vm10 bash[23387]: cluster 2026-03-09T21:18:47.121894+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T21:18:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:48 vm10 bash[23387]: cluster 2026-03-09T21:18:47.121894+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T21:18:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:48 vm10 bash[23387]: cluster 2026-03-09T21:18:47.747402+0000 mgr.y (mgr.24416) 112 : cluster [DBG] 
pgmap v91: 196 pgs: 8 creating+peering, 188 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:48 vm10 bash[23387]: cluster 2026-03-09T21:18:47.747402+0000 mgr.y (mgr.24416) 112 : cluster [DBG] pgmap v91: 196 pgs: 8 creating+peering, 188 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:48 vm10 bash[23387]: cluster 2026-03-09T21:18:48.138959+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T21:18:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:48 vm10 bash[23387]: cluster 2026-03-09T21:18:48.138959+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:48 vm07 bash[20771]: cluster 2026-03-09T21:18:47.121894+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:48 vm07 bash[20771]: cluster 2026-03-09T21:18:47.121894+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:48 vm07 bash[20771]: cluster 2026-03-09T21:18:47.747402+0000 mgr.y (mgr.24416) 112 : cluster [DBG] pgmap v91: 196 pgs: 8 creating+peering, 188 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:48 vm07 bash[20771]: cluster 2026-03-09T21:18:47.747402+0000 mgr.y (mgr.24416) 112 : cluster [DBG] pgmap v91: 196 pgs: 8 creating+peering, 188 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:48 vm07 bash[20771]: cluster 2026-03-09T21:18:48.138959+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 
2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:48 vm07 bash[20771]: cluster 2026-03-09T21:18:48.138959+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:48 vm07 bash[28052]: cluster 2026-03-09T21:18:47.121894+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:48 vm07 bash[28052]: cluster 2026-03-09T21:18:47.121894+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:48 vm07 bash[28052]: cluster 2026-03-09T21:18:47.747402+0000 mgr.y (mgr.24416) 112 : cluster [DBG] pgmap v91: 196 pgs: 8 creating+peering, 188 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:48 vm07 bash[28052]: cluster 2026-03-09T21:18:47.747402+0000 mgr.y (mgr.24416) 112 : cluster [DBG] pgmap v91: 196 pgs: 8 creating+peering, 188 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:48 vm07 bash[28052]: cluster 2026-03-09T21:18:48.138959+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T21:18:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:48 vm07 bash[28052]: cluster 2026-03-09T21:18:48.138959+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T21:18:49.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:18:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:18:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:18:49.143 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_blocklist_add PASSED [ 19%] 2026-03-09T21:18:49.165 
INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_cluster_stats PASSED [ 20%] 2026-03-09T21:18:49.179 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_monitor_log PASSED [ 21%] 2026-03-09T21:18:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:49 vm10 bash[23387]: audit 2026-03-09T21:18:48.175746+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T21:18:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:49 vm10 bash[23387]: audit 2026-03-09T21:18:48.175746+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T21:18:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:49 vm10 bash[23387]: audit 2026-03-09T21:18:49.134278+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T21:18:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:49 vm10 bash[23387]: audit 2026-03-09T21:18:49.134278+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
192.168.123.107:0/4253722863' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T21:18:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:49 vm10 bash[23387]: cluster 2026-03-09T21:18:49.141442+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T21:18:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:49 vm10 bash[23387]: cluster 2026-03-09T21:18:49.141442+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:49 vm07 bash[20771]: audit 2026-03-09T21:18:48.175746+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:49 vm07 bash[20771]: audit 2026-03-09T21:18:48.175746+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:49 vm07 bash[20771]: audit 2026-03-09T21:18:49.134278+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:49 vm07 bash[20771]: audit 2026-03-09T21:18:49.134278+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
192.168.123.107:0/4253722863' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:49 vm07 bash[20771]: cluster 2026-03-09T21:18:49.141442+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:49 vm07 bash[20771]: cluster 2026-03-09T21:18:49.141442+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:49 vm07 bash[28052]: audit 2026-03-09T21:18:48.175746+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:49 vm07 bash[28052]: audit 2026-03-09T21:18:48.175746+0000 mon.a (mon.0) 852 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:49 vm07 bash[28052]: audit 2026-03-09T21:18:49.134278+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 192.168.123.107:0/4253722863' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:49 vm07 bash[28052]: audit 2026-03-09T21:18:49.134278+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
192.168.123.107:0/4253722863' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:49 vm07 bash[28052]: cluster 2026-03-09T21:18:49.141442+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T21:18:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:49 vm07 bash[28052]: cluster 2026-03-09T21:18:49.141442+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: cluster 2026-03-09T21:18:49.720332+0000 mon.a (mon.0) 855 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: cluster 2026-03-09T21:18:49.720332+0000 mon.a (mon.0) 855 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: cluster 2026-03-09T21:18:49.743440+0000 mon.a (mon.0) 856 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: cluster 2026-03-09T21:18:49.743440+0000 mon.a (mon.0) 856 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: cluster 2026-03-09T21:18:49.747768+0000 mgr.y (mgr.24416) 113 : cluster [DBG] pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: cluster 2026-03-09T21:18:49.747768+0000 mgr.y (mgr.24416) 113 : cluster [DBG] pgmap v95: 196 pgs: 32 unknown, 164 
active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: audit 2026-03-09T21:18:49.748535+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.107:0/644364752' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: audit 2026-03-09T21:18:49.748535+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.107:0/644364752' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: audit 2026-03-09T21:18:49.752390+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:50 vm07 bash[20771]: audit 2026-03-09T21:18:49.752390+0000 mon.a (mon.0) 857 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: cluster 2026-03-09T21:18:49.720332+0000 mon.a (mon.0) 855 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: cluster 2026-03-09T21:18:49.720332+0000 mon.a (mon.0) 855 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: cluster 2026-03-09T21:18:49.743440+0000 mon.a (mon.0) 856 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: cluster 2026-03-09T21:18:49.743440+0000 mon.a (mon.0) 856 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: cluster 2026-03-09T21:18:49.747768+0000 mgr.y (mgr.24416) 113 : cluster [DBG] pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: cluster 2026-03-09T21:18:49.747768+0000 mgr.y (mgr.24416) 113 : cluster [DBG] pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: audit 2026-03-09T21:18:49.748535+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 
192.168.123.107:0/644364752' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: audit 2026-03-09T21:18:49.748535+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.107:0/644364752' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: audit 2026-03-09T21:18:49.752390+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:50 vm07 bash[28052]: audit 2026-03-09T21:18:49.752390+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: cluster 2026-03-09T21:18:49.720332+0000 mon.a (mon.0) 855 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: cluster 2026-03-09T21:18:49.720332+0000 mon.a (mon.0) 855 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: cluster 2026-03-09T21:18:49.743440+0000 mon.a (mon.0) 856 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: cluster 2026-03-09T21:18:49.743440+0000 mon.a (mon.0) 856 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: cluster 2026-03-09T21:18:49.747768+0000 mgr.y (mgr.24416) 113 
: cluster [DBG] pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: cluster 2026-03-09T21:18:49.747768+0000 mgr.y (mgr.24416) 113 : cluster [DBG] pgmap v95: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: audit 2026-03-09T21:18:49.748535+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.107:0/644364752' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: audit 2026-03-09T21:18:49.748535+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.107:0/644364752' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: audit 2026-03-09T21:18:49.752390+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:50 vm10 bash[23387]: audit 2026-03-09T21:18:49.752390+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:51.749 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_last_version PASSED [ 23%] 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:51 vm07 bash[20771]: audit 2026-03-09T21:18:50.733026+0000 mon.a (mon.0) 858 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:51 vm07 bash[20771]: audit 2026-03-09T21:18:50.733026+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:51 vm07 bash[20771]: cluster 2026-03-09T21:18:50.751221+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:51 vm07 bash[20771]: cluster 2026-03-09T21:18:50.751221+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:51 vm07 bash[28052]: audit 2026-03-09T21:18:50.733026+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:51 vm07 bash[28052]: audit 2026-03-09T21:18:50.733026+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:51 vm07 bash[28052]: cluster 2026-03-09T21:18:50.751221+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T21:18:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:51 vm07 bash[28052]: cluster 2026-03-09T21:18:50.751221+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T21:18:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:51 vm10 bash[23387]: audit 2026-03-09T21:18:50.733026+0000 mon.a (mon.0) 858 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:51 vm10 bash[23387]: audit 2026-03-09T21:18:50.733026+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:51 vm10 bash[23387]: cluster 2026-03-09T21:18:50.751221+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T21:18:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:51 vm10 bash[23387]: cluster 2026-03-09T21:18:50.751221+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-09T21:18:53.112 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:52 vm10 bash[23387]: cluster 2026-03-09T21:18:51.738935+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T21:18:53.112 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:52 vm10 bash[23387]: cluster 2026-03-09T21:18:51.738935+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T21:18:53.112 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:52 vm10 bash[23387]: cluster 2026-03-09T21:18:51.748152+0000 mgr.y (mgr.24416) 114 : cluster [DBG] pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:53.112 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:52 vm10 bash[23387]: cluster 2026-03-09T21:18:51.748152+0000 mgr.y (mgr.24416) 114 : cluster [DBG] pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:53.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:52 vm07 bash[20771]: cluster 2026-03-09T21:18:51.738935+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:52 vm07 bash[20771]: cluster 
2026-03-09T21:18:51.738935+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:52 vm07 bash[20771]: cluster 2026-03-09T21:18:51.748152+0000 mgr.y (mgr.24416) 114 : cluster [DBG] pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:52 vm07 bash[20771]: cluster 2026-03-09T21:18:51.748152+0000 mgr.y (mgr.24416) 114 : cluster [DBG] pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:52 vm07 bash[28052]: cluster 2026-03-09T21:18:51.738935+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:52 vm07 bash[28052]: cluster 2026-03-09T21:18:51.738935+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:52 vm07 bash[28052]: cluster 2026-03-09T21:18:51.748152+0000 mgr.y (mgr.24416) 114 : cluster [DBG] pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:52 vm07 bash[28052]: cluster 2026-03-09T21:18:51.748152+0000 mgr.y (mgr.24416) 114 : cluster [DBG] pgmap v98: 164 pgs: 164 active+clean; 455 KiB data, 219 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:53 vm07 bash[20771]: cluster 2026-03-09T21:18:52.769311+0000 mon.a (mon.0) 861 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:53 vm07 bash[20771]: cluster 2026-03-09T21:18:52.769311+0000 mon.a (mon.0) 861 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T21:18:54.115 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:53 vm07 bash[20771]: audit 2026-03-09T21:18:52.783371+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:53 vm07 bash[20771]: audit 2026-03-09T21:18:52.783371+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:53 vm07 bash[28052]: cluster 2026-03-09T21:18:52.769311+0000 mon.a (mon.0) 861 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:53 vm07 bash[28052]: cluster 2026-03-09T21:18:52.769311+0000 mon.a (mon.0) 861 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:53 vm07 bash[28052]: audit 2026-03-09T21:18:52.783371+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:53 vm07 bash[28052]: audit 2026-03-09T21:18:52.783371+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 
192.168.123.107:0/1036518655' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:53 vm10 bash[23387]: cluster 2026-03-09T21:18:52.769311+0000 mon.a (mon.0) 861 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T21:18:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:53 vm10 bash[23387]: cluster 2026-03-09T21:18:52.769311+0000 mon.a (mon.0) 861 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T21:18:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:53 vm10 bash[23387]: audit 2026-03-09T21:18:52.783371+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:53 vm10 bash[23387]: audit 2026-03-09T21:18:52.783371+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:54.831 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_stats PASSED [ 24%] 2026-03-09T21:18:55.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:54 vm07 bash[20771]: cluster 2026-03-09T21:18:53.748457+0000 mgr.y (mgr.24416) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:55.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:54 vm07 bash[20771]: cluster 2026-03-09T21:18:53.748457+0000 mgr.y (mgr.24416) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:54 vm07 bash[20771]: audit 2026-03-09T21:18:53.805299+0000 mon.a (mon.0) 
863 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:54 vm07 bash[20771]: audit 2026-03-09T21:18:53.805299+0000 mon.a (mon.0) 863 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:54 vm07 bash[20771]: cluster 2026-03-09T21:18:53.821255+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:54 vm07 bash[20771]: cluster 2026-03-09T21:18:53.821255+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:54 vm07 bash[28052]: cluster 2026-03-09T21:18:53.748457+0000 mgr.y (mgr.24416) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:54 vm07 bash[28052]: cluster 2026-03-09T21:18:53.748457+0000 mgr.y (mgr.24416) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:54 vm07 bash[28052]: audit 2026-03-09T21:18:53.805299+0000 mon.a (mon.0) 863 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:54 vm07 bash[28052]: audit 2026-03-09T21:18:53.805299+0000 mon.a (mon.0) 863 : audit [INF] from='client.? 
192.168.123.107:0/1036518655' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:54 vm07 bash[28052]: cluster 2026-03-09T21:18:53.821255+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T21:18:55.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:54 vm07 bash[28052]: cluster 2026-03-09T21:18:53.821255+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T21:18:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:54 vm10 bash[23387]: cluster 2026-03-09T21:18:53.748457+0000 mgr.y (mgr.24416) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:54 vm10 bash[23387]: cluster 2026-03-09T21:18:53.748457+0000 mgr.y (mgr.24416) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:54 vm10 bash[23387]: audit 2026-03-09T21:18:53.805299+0000 mon.a (mon.0) 863 : audit [INF] from='client.? 192.168.123.107:0/1036518655' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:54 vm10 bash[23387]: audit 2026-03-09T21:18:53.805299+0000 mon.a (mon.0) 863 : audit [INF] from='client.? 
192.168.123.107:0/1036518655' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:54 vm10 bash[23387]: cluster 2026-03-09T21:18:53.821255+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T21:18:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:54 vm10 bash[23387]: cluster 2026-03-09T21:18:53.821255+0000 mon.a (mon.0) 864 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-09T21:18:56.279 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:55 vm10 bash[23387]: cluster 2026-03-09T21:18:54.812773+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T21:18:56.279 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:55 vm10 bash[23387]: cluster 2026-03-09T21:18:54.812773+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T21:18:56.279 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:55 vm10 bash[23387]: cluster 2026-03-09T21:18:55.748730+0000 mgr.y (mgr.24416) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:56.279 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:55 vm10 bash[23387]: cluster 2026-03-09T21:18:55.748730+0000 mgr.y (mgr.24416) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:55 vm07 bash[28052]: cluster 2026-03-09T21:18:54.812773+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:55 vm07 bash[28052]: cluster 2026-03-09T21:18:54.812773+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:55 vm07 bash[28052]: 
cluster 2026-03-09T21:18:55.748730+0000 mgr.y (mgr.24416) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:55 vm07 bash[28052]: cluster 2026-03-09T21:18:55.748730+0000 mgr.y (mgr.24416) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:55 vm07 bash[20771]: cluster 2026-03-09T21:18:54.812773+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:55 vm07 bash[20771]: cluster 2026-03-09T21:18:54.812773+0000 mon.a (mon.0) 865 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:55 vm07 bash[20771]: cluster 2026-03-09T21:18:55.748730+0000 mgr.y (mgr.24416) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:55 vm07 bash[20771]: cluster 2026-03-09T21:18:55.748730+0000 mgr.y (mgr.24416) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:18:56.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:18:56 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: cluster 2026-03-09T21:18:55.947625+0000 mon.a (mon.0) 866 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: cluster 
2026-03-09T21:18:55.947625+0000 mon.a (mon.0) 866 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: cluster 2026-03-09T21:18:55.993068+0000 mon.a (mon.0) 867 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: cluster 2026-03-09T21:18:55.993068+0000 mon.a (mon.0) 867 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: audit 2026-03-09T21:18:56.279211+0000 mgr.y (mgr.24416) 117 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: audit 2026-03-09T21:18:56.279211+0000 mgr.y (mgr.24416) 117 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: audit 2026-03-09T21:18:56.933602+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:56 vm07 bash[20771]: audit 2026-03-09T21:18:56.933602+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: cluster 2026-03-09T21:18:55.947625+0000 mon.a (mon.0) 866 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: cluster 2026-03-09T21:18:55.947625+0000 mon.a (mon.0) 866 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: cluster 2026-03-09T21:18:55.993068+0000 mon.a (mon.0) 867 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: cluster 2026-03-09T21:18:55.993068+0000 mon.a (mon.0) 867 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: audit 2026-03-09T21:18:56.279211+0000 mgr.y (mgr.24416) 117 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: audit 2026-03-09T21:18:56.279211+0000 mgr.y (mgr.24416) 117 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: audit 2026-03-09T21:18:56.933602+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:56 vm07 bash[28052]: audit 2026-03-09T21:18:56.933602+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: cluster 2026-03-09T21:18:55.947625+0000 mon.a (mon.0) 866 : cluster [WRN] 
Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: cluster 2026-03-09T21:18:55.947625+0000 mon.a (mon.0) 866 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: cluster 2026-03-09T21:18:55.993068+0000 mon.a (mon.0) 867 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: cluster 2026-03-09T21:18:55.993068+0000 mon.a (mon.0) 867 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: audit 2026-03-09T21:18:56.279211+0000 mgr.y (mgr.24416) 117 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: audit 2026-03-09T21:18:56.279211+0000 mgr.y (mgr.24416) 117 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: audit 2026-03-09T21:18:56.933602+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:57.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:56 vm10 bash[23387]: audit 2026-03-09T21:18:56.933602+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 
bash[20771]: cluster 2026-03-09T21:18:57.012052+0000 mon.a (mon.0) 868 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: cluster 2026-03-09T21:18:57.012052+0000 mon.a (mon.0) 868 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: audit 2026-03-09T21:18:57.040880+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.107:0/1408339198' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: audit 2026-03-09T21:18:57.040880+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.107:0/1408339198' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: audit 2026-03-09T21:18:57.041307+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: audit 2026-03-09T21:18:57.041307+0000 mon.a (mon.0) 869 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: cluster 2026-03-09T21:18:57.749189+0000 mgr.y (mgr.24416) 118 : cluster [DBG] pgmap v106: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:57 vm07 bash[20771]: cluster 2026-03-09T21:18:57.749189+0000 mgr.y (mgr.24416) 118 : cluster [DBG] pgmap v106: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: cluster 2026-03-09T21:18:57.012052+0000 mon.a (mon.0) 868 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: cluster 2026-03-09T21:18:57.012052+0000 mon.a (mon.0) 868 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: audit 2026-03-09T21:18:57.040880+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.107:0/1408339198' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: audit 2026-03-09T21:18:57.040880+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.107:0/1408339198' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: audit 2026-03-09T21:18:57.041307+0000 mon.a (mon.0) 869 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: audit 2026-03-09T21:18:57.041307+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: cluster 2026-03-09T21:18:57.749189+0000 mgr.y (mgr.24416) 118 : cluster [DBG] pgmap v106: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:18:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:58 vm07 bash[28052]: cluster 2026-03-09T21:18:57.749189+0000 mgr.y (mgr.24416) 118 : cluster [DBG] pgmap v106: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: cluster 2026-03-09T21:18:57.012052+0000 mon.a (mon.0) 868 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: cluster 2026-03-09T21:18:57.012052+0000 mon.a (mon.0) 868 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: audit 2026-03-09T21:18:57.040880+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.107:0/1408339198' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: audit 2026-03-09T21:18:57.040880+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 
192.168.123.107:0/1408339198' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: audit 2026-03-09T21:18:57.041307+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: audit 2026-03-09T21:18:57.041307+0000 mon.a (mon.0) 869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: cluster 2026-03-09T21:18:57.749189+0000 mgr.y (mgr.24416) 118 : cluster [DBG] pgmap v106: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:18:58.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:57 vm10 bash[23387]: cluster 2026-03-09T21:18:57.749189+0000 mgr.y (mgr.24416) 118 : cluster [DBG] pgmap v106: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:18:59.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:18:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:18:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:18:59.179 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write PASSED [ 25%] 2026-03-09T21:18:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:59 vm10 bash[23387]: audit 2026-03-09T21:18:58.159792+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:59 vm10 bash[23387]: audit 2026-03-09T21:18:58.159792+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:59 vm10 bash[23387]: cluster 2026-03-09T21:18:58.168538+0000 mon.a (mon.0) 871 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T21:18:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:18:59 vm10 bash[23387]: cluster 2026-03-09T21:18:58.168538+0000 mon.a (mon.0) 871 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:59 vm07 bash[20771]: audit 2026-03-09T21:18:58.159792+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:59 vm07 bash[20771]: audit 2026-03-09T21:18:58.159792+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:59 vm07 bash[20771]: cluster 2026-03-09T21:18:58.168538+0000 mon.a (mon.0) 871 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:18:59 vm07 bash[20771]: cluster 2026-03-09T21:18:58.168538+0000 mon.a (mon.0) 871 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:59 vm07 bash[28052]: audit 2026-03-09T21:18:58.159792+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:59 vm07 bash[28052]: audit 2026-03-09T21:18:58.159792+0000 mon.a (mon.0) 870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:59 vm07 bash[28052]: cluster 2026-03-09T21:18:58.168538+0000 mon.a (mon.0) 871 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T21:18:59.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:18:59 vm07 bash[28052]: cluster 2026-03-09T21:18:58.168538+0000 mon.a (mon.0) 871 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T21:19:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:00 vm10 bash[23387]: cluster 2026-03-09T21:18:59.177885+0000 mon.a (mon.0) 872 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T21:19:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:00 vm10 bash[23387]: cluster 2026-03-09T21:18:59.177885+0000 mon.a (mon.0) 872 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T21:19:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:00 vm10 bash[23387]: cluster 2026-03-09T21:18:59.749453+0000 mgr.y (mgr.24416) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:00 vm10 bash[23387]: cluster 2026-03-09T21:18:59.749453+0000 mgr.y (mgr.24416) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:00 vm07 bash[20771]: cluster 2026-03-09T21:18:59.177885+0000 mon.a (mon.0) 872 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T21:19:00.615 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:00 vm07 bash[20771]: cluster 2026-03-09T21:18:59.177885+0000 mon.a (mon.0) 872 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:00 vm07 bash[20771]: cluster 2026-03-09T21:18:59.749453+0000 mgr.y (mgr.24416) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:00 vm07 bash[20771]: cluster 2026-03-09T21:18:59.749453+0000 mgr.y (mgr.24416) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:00 vm07 bash[28052]: cluster 2026-03-09T21:18:59.177885+0000 mon.a (mon.0) 872 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:00 vm07 bash[28052]: cluster 2026-03-09T21:18:59.177885+0000 mon.a (mon.0) 872 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:00 vm07 bash[28052]: cluster 2026-03-09T21:18:59.749453+0000 mgr.y (mgr.24416) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:00 vm07 bash[28052]: cluster 2026-03-09T21:18:59.749453+0000 mgr.y (mgr.24416) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:01 vm10 bash[23387]: cluster 2026-03-09T21:19:00.203398+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T21:19:01.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:01 vm10 bash[23387]: cluster 2026-03-09T21:19:00.203398+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T21:19:01.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:01 vm07 bash[20771]: cluster 2026-03-09T21:19:00.203398+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T21:19:01.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:01 vm07 bash[20771]: cluster 2026-03-09T21:19:00.203398+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T21:19:01.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:01 vm07 bash[28052]: cluster 2026-03-09T21:19:00.203398+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T21:19:01.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:01 vm07 bash[28052]: cluster 2026-03-09T21:19:00.203398+0000 mon.a (mon.0) 873 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: cluster 2026-03-09T21:19:01.348221+0000 mon.a (mon.0) 874 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: cluster 2026-03-09T21:19:01.348221+0000 mon.a (mon.0) 874 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: audit 2026-03-09T21:19:01.408264+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.107:0/860538250' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: audit 2026-03-09T21:19:01.408264+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.107:0/860538250' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: audit 2026-03-09T21:19:01.408844+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: audit 2026-03-09T21:19:01.408844+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: cluster 2026-03-09T21:19:01.749838+0000 mgr.y (mgr.24416) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: cluster 2026-03-09T21:19:01.749838+0000 mgr.y (mgr.24416) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: audit 2026-03-09T21:19:02.289725+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: audit 2026-03-09T21:19:02.289725+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: cluster 2026-03-09T21:19:02.299781+0000 mon.a (mon.0) 877 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T21:19:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:02 vm10 bash[23387]: cluster 2026-03-09T21:19:02.299781+0000 mon.a (mon.0) 877 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: cluster 2026-03-09T21:19:01.348221+0000 mon.a (mon.0) 874 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: cluster 2026-03-09T21:19:01.348221+0000 mon.a (mon.0) 874 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: audit 2026-03-09T21:19:01.408264+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.107:0/860538250' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: audit 2026-03-09T21:19:01.408264+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.107:0/860538250' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: audit 2026-03-09T21:19:01.408844+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: audit 2026-03-09T21:19:01.408844+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: cluster 2026-03-09T21:19:01.749838+0000 mgr.y (mgr.24416) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: cluster 2026-03-09T21:19:01.749838+0000 mgr.y (mgr.24416) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: audit 2026-03-09T21:19:02.289725+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: audit 2026-03-09T21:19:02.289725+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: cluster 2026-03-09T21:19:02.299781+0000 mon.a (mon.0) 877 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:02 vm07 bash[20771]: cluster 2026-03-09T21:19:02.299781+0000 mon.a (mon.0) 877 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: cluster 2026-03-09T21:19:01.348221+0000 mon.a (mon.0) 874 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: cluster 2026-03-09T21:19:01.348221+0000 mon.a (mon.0) 874 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: audit 2026-03-09T21:19:01.408264+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.107:0/860538250' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: audit 2026-03-09T21:19:01.408264+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.107:0/860538250' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: audit 2026-03-09T21:19:01.408844+0000 mon.a (mon.0) 875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: audit 2026-03-09T21:19:01.408844+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: cluster 2026-03-09T21:19:01.749838+0000 mgr.y (mgr.24416) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: cluster 2026-03-09T21:19:01.749838+0000 mgr.y (mgr.24416) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 262 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: audit 2026-03-09T21:19:02.289725+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: audit 2026-03-09T21:19:02.289725+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: cluster 2026-03-09T21:19:02.299781+0000 mon.a (mon.0) 877 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T21:19:02.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:02 vm07 bash[28052]: cluster 2026-03-09T21:19:02.299781+0000 mon.a (mon.0) 877 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T21:19:03.311 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_full PASSED [ 26%] 2026-03-09T21:19:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:03 vm10 bash[23387]: cluster 2026-03-09T21:19:02.374106+0000 mon.a (mon.0) 878 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:03 vm10 bash[23387]: cluster 2026-03-09T21:19:02.374106+0000 mon.a (mon.0) 878 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:03 vm10 bash[23387]: cluster 2026-03-09T21:19:03.310110+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T21:19:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:03 vm10 bash[23387]: cluster 2026-03-09T21:19:03.310110+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:03 vm07 bash[28052]: cluster 2026-03-09T21:19:02.374106+0000 mon.a (mon.0) 878 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:03 vm07 bash[28052]: cluster 2026-03-09T21:19:02.374106+0000 mon.a (mon.0) 878 : 
cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:03 vm07 bash[28052]: cluster 2026-03-09T21:19:03.310110+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:03 vm07 bash[28052]: cluster 2026-03-09T21:19:03.310110+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:03 vm07 bash[20771]: cluster 2026-03-09T21:19:02.374106+0000 mon.a (mon.0) 878 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:03 vm07 bash[20771]: cluster 2026-03-09T21:19:02.374106+0000 mon.a (mon.0) 878 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:03 vm07 bash[20771]: cluster 2026-03-09T21:19:03.310110+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T21:19:03.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:03 vm07 bash[20771]: cluster 2026-03-09T21:19:03.310110+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T21:19:04.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:04 vm07 bash[20771]: cluster 2026-03-09T21:19:03.750123+0000 mgr.y (mgr.24416) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:04.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:04 vm07 bash[20771]: cluster 2026-03-09T21:19:03.750123+0000 mgr.y (mgr.24416) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T21:19:04.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:04 vm07 bash[28052]: cluster 2026-03-09T21:19:03.750123+0000 mgr.y (mgr.24416) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:04.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:04 vm07 bash[28052]: cluster 2026-03-09T21:19:03.750123+0000 mgr.y (mgr.24416) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:04.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:04 vm10 bash[23387]: cluster 2026-03-09T21:19:03.750123+0000 mgr.y (mgr.24416) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:04.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:04 vm10 bash[23387]: cluster 2026-03-09T21:19:03.750123+0000 mgr.y (mgr.24416) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: cluster 2026-03-09T21:19:04.504044+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: cluster 2026-03-09T21:19:04.504044+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: cluster 2026-03-09T21:19:05.491503+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: cluster 2026-03-09T21:19:05.491503+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 
2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: audit 2026-03-09T21:19:05.568757+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.107:0/99396479' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: audit 2026-03-09T21:19:05.568757+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.107:0/99396479' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: audit 2026-03-09T21:19:05.569172+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:05.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:05 vm10 bash[23387]: audit 2026-03-09T21:19:05.569172+0000 mon.a (mon.0) 882 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: cluster 2026-03-09T21:19:04.504044+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: cluster 2026-03-09T21:19:04.504044+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: cluster 2026-03-09T21:19:05.491503+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: cluster 2026-03-09T21:19:05.491503+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: audit 2026-03-09T21:19:05.568757+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.107:0/99396479' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: audit 2026-03-09T21:19:05.568757+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.107:0/99396479' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: audit 2026-03-09T21:19:05.569172+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:05 vm07 bash[20771]: audit 2026-03-09T21:19:05.569172+0000 mon.a (mon.0) 882 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: cluster 2026-03-09T21:19:04.504044+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: cluster 2026-03-09T21:19:04.504044+0000 mon.a (mon.0) 880 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: cluster 2026-03-09T21:19:05.491503+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: cluster 2026-03-09T21:19:05.491503+0000 mon.a (mon.0) 881 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: audit 2026-03-09T21:19:05.568757+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.107:0/99396479' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: audit 2026-03-09T21:19:05.568757+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.107:0/99396479' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: audit 2026-03-09T21:19:05.569172+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:05 vm07 bash[28052]: audit 2026-03-09T21:19:05.569172+0000 mon.a (mon.0) 882 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:06.678 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:19:06 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:19:06.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:06 vm10 bash[23387]: cluster 2026-03-09T21:19:05.750410+0000 mgr.y (mgr.24416) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:06.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:06 vm10 bash[23387]: cluster 2026-03-09T21:19:05.750410+0000 mgr.y (mgr.24416) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:07.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:06 vm07 bash[20771]: cluster 2026-03-09T21:19:05.750410+0000 mgr.y (mgr.24416) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:07.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:06 vm07 bash[20771]: cluster 2026-03-09T21:19:05.750410+0000 mgr.y (mgr.24416) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:07.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:06 vm07 bash[28052]: cluster 2026-03-09T21:19:05.750410+0000 mgr.y (mgr.24416) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:07.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:06 vm07 bash[28052]: cluster 2026-03-09T21:19:05.750410+0000 mgr.y (mgr.24416) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:07.696 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame PASSED [ 27%] 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:07 vm07 bash[20771]: audit 2026-03-09T21:19:06.282314+0000 mgr.y (mgr.24416) 123 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:07 vm07 bash[20771]: audit 2026-03-09T21:19:06.282314+0000 mgr.y (mgr.24416) 123 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:07 vm07 bash[20771]: audit 2026-03-09T21:19:06.625369+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:07 vm07 bash[20771]: audit 2026-03-09T21:19:06.625369+0000 mon.a (mon.0) 883 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:07 vm07 bash[20771]: cluster 2026-03-09T21:19:06.628648+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:07 vm07 bash[20771]: cluster 2026-03-09T21:19:06.628648+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:07 vm07 bash[28052]: audit 2026-03-09T21:19:06.282314+0000 mgr.y (mgr.24416) 123 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:07 vm07 bash[28052]: audit 2026-03-09T21:19:06.282314+0000 mgr.y (mgr.24416) 123 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:07 vm07 bash[28052]: audit 2026-03-09T21:19:06.625369+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:07 vm07 bash[28052]: audit 2026-03-09T21:19:06.625369+0000 mon.a (mon.0) 883 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:07 vm07 bash[28052]: cluster 2026-03-09T21:19:06.628648+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T21:19:08.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:07 vm07 bash[28052]: cluster 2026-03-09T21:19:06.628648+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T21:19:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:07 vm10 bash[23387]: audit 2026-03-09T21:19:06.282314+0000 mgr.y (mgr.24416) 123 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:07 vm10 bash[23387]: audit 2026-03-09T21:19:06.282314+0000 mgr.y (mgr.24416) 123 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:07 vm10 bash[23387]: audit 2026-03-09T21:19:06.625369+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:07 vm10 bash[23387]: audit 2026-03-09T21:19:06.625369+0000 mon.a (mon.0) 883 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:07 vm10 bash[23387]: cluster 2026-03-09T21:19:06.628648+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T21:19:08.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:07 vm10 bash[23387]: cluster 2026-03-09T21:19:06.628648+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:08 vm07 bash[20771]: cluster 2026-03-09T21:19:07.686627+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:08 vm07 bash[20771]: cluster 2026-03-09T21:19:07.686627+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:08 vm07 bash[20771]: cluster 2026-03-09T21:19:07.751015+0000 mgr.y (mgr.24416) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:08 vm07 bash[20771]: cluster 2026-03-09T21:19:07.751015+0000 mgr.y (mgr.24416) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:19:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:19:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:08 vm07 bash[28052]: cluster 2026-03-09T21:19:07.686627+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:08 vm07 bash[28052]: cluster 
2026-03-09T21:19:07.686627+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:08 vm07 bash[28052]: cluster 2026-03-09T21:19:07.751015+0000 mgr.y (mgr.24416) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:09.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:08 vm07 bash[28052]: cluster 2026-03-09T21:19:07.751015+0000 mgr.y (mgr.24416) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:08 vm10 bash[23387]: cluster 2026-03-09T21:19:07.686627+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T21:19:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:08 vm10 bash[23387]: cluster 2026-03-09T21:19:07.686627+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T21:19:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:08 vm10 bash[23387]: cluster 2026-03-09T21:19:07.751015+0000 mgr.y (mgr.24416) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:09.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:08 vm10 bash[23387]: cluster 2026-03-09T21:19:07.751015+0000 mgr.y (mgr.24416) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:09 vm07 bash[20771]: cluster 2026-03-09T21:19:08.695431+0000 mon.a (mon.0) 886 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:09 
vm07 bash[20771]: cluster 2026-03-09T21:19:08.695431+0000 mon.a (mon.0) 886 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:09 vm07 bash[20771]: cluster 2026-03-09T21:19:08.741208+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:09 vm07 bash[20771]: cluster 2026-03-09T21:19:08.741208+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:09 vm07 bash[28052]: cluster 2026-03-09T21:19:08.695431+0000 mon.a (mon.0) 886 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:09 vm07 bash[28052]: cluster 2026-03-09T21:19:08.695431+0000 mon.a (mon.0) 886 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:09 vm07 bash[28052]: cluster 2026-03-09T21:19:08.741208+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T21:19:10.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:09 vm07 bash[28052]: cluster 2026-03-09T21:19:08.741208+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T21:19:10.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:09 vm10 bash[23387]: cluster 2026-03-09T21:19:08.695431+0000 mon.a (mon.0) 886 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:10.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:09 vm10 bash[23387]: cluster 2026-03-09T21:19:08.695431+0000 mon.a (mon.0) 886 : cluster [WRN] Health check update: 1 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:10.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:09 vm10 bash[23387]: cluster 2026-03-09T21:19:08.741208+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T21:19:10.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:09 vm10 bash[23387]: cluster 2026-03-09T21:19:08.741208+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T21:19:11.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:10 vm10 bash[23387]: cluster 2026-03-09T21:19:09.728979+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T21:19:11.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:10 vm10 bash[23387]: cluster 2026-03-09T21:19:09.728979+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T21:19:11.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:10 vm10 bash[23387]: cluster 2026-03-09T21:19:09.753872+0000 mgr.y (mgr.24416) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:11.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:10 vm10 bash[23387]: cluster 2026-03-09T21:19:09.753872+0000 mgr.y (mgr.24416) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:11.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:10 vm10 bash[23387]: audit 2026-03-09T21:19:09.781643+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:11.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:10 vm10 bash[23387]: audit 2026-03-09T21:19:09.781643+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 
192.168.123.107:0/3755522615' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:10 vm07 bash[20771]: cluster 2026-03-09T21:19:09.728979+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:10 vm07 bash[20771]: cluster 2026-03-09T21:19:09.728979+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:10 vm07 bash[20771]: cluster 2026-03-09T21:19:09.753872+0000 mgr.y (mgr.24416) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:10 vm07 bash[20771]: cluster 2026-03-09T21:19:09.753872+0000 mgr.y (mgr.24416) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:10 vm07 bash[20771]: audit 2026-03-09T21:19:09.781643+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:10 vm07 bash[20771]: audit 2026-03-09T21:19:09.781643+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 
192.168.123.107:0/3755522615' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:10 vm07 bash[28052]: cluster 2026-03-09T21:19:09.728979+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:10 vm07 bash[28052]: cluster 2026-03-09T21:19:09.728979+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:10 vm07 bash[28052]: cluster 2026-03-09T21:19:09.753872+0000 mgr.y (mgr.24416) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:10 vm07 bash[28052]: cluster 2026-03-09T21:19:09.753872+0000 mgr.y (mgr.24416) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:10 vm07 bash[28052]: audit 2026-03-09T21:19:09.781643+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:10 vm07 bash[28052]: audit 2026-03-09T21:19:09.781643+0000 mon.a (mon.0) 889 : audit [INF] from='client.? 
192.168.123.107:0/3755522615' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:11.828 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_append PASSED [ 28%] 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: audit 2026-03-09T21:19:10.818119+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: audit 2026-03-09T21:19:10.818119+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: cluster 2026-03-09T21:19:10.919452+0000 mon.a (mon.0) 891 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: cluster 2026-03-09T21:19:10.919452+0000 mon.a (mon.0) 891 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: cluster 2026-03-09T21:19:11.754273+0000 mgr.y (mgr.24416) 126 : cluster [DBG] pgmap v126: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: cluster 2026-03-09T21:19:11.754273+0000 mgr.y (mgr.24416) 126 : cluster [DBG] pgmap v126: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: cluster 2026-03-09T21:19:11.826503+0000 mon.a 
(mon.0) 892 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:11 vm07 bash[20771]: cluster 2026-03-09T21:19:11.826503+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: audit 2026-03-09T21:19:10.818119+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: audit 2026-03-09T21:19:10.818119+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: cluster 2026-03-09T21:19:10.919452+0000 mon.a (mon.0) 891 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: cluster 2026-03-09T21:19:10.919452+0000 mon.a (mon.0) 891 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: cluster 2026-03-09T21:19:11.754273+0000 mgr.y (mgr.24416) 126 : cluster [DBG] pgmap v126: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: cluster 2026-03-09T21:19:11.754273+0000 mgr.y (mgr.24416) 126 : cluster [DBG] pgmap v126: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: cluster 2026-03-09T21:19:11.826503+0000 mon.a 
(mon.0) 892 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T21:19:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:11 vm07 bash[28052]: cluster 2026-03-09T21:19:11.826503+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: audit 2026-03-09T21:19:10.818119+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: audit 2026-03-09T21:19:10.818119+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.107:0/3755522615' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: cluster 2026-03-09T21:19:10.919452+0000 mon.a (mon.0) 891 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: cluster 2026-03-09T21:19:10.919452+0000 mon.a (mon.0) 891 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: cluster 2026-03-09T21:19:11.754273+0000 mgr.y (mgr.24416) 126 : cluster [DBG] pgmap v126: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: cluster 2026-03-09T21:19:11.754273+0000 mgr.y (mgr.24416) 126 : cluster [DBG] pgmap v126: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 320 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: cluster 2026-03-09T21:19:11.826503+0000 mon.a 
(mon.0) 892 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T21:19:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:11 vm10 bash[23387]: cluster 2026-03-09T21:19:11.826503+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:12 vm07 bash[20771]: audit 2026-03-09T21:19:11.941515+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:12 vm07 bash[20771]: audit 2026-03-09T21:19:11.941515+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:12 vm07 bash[20771]: cluster 2026-03-09T21:19:12.830850+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:12 vm07 bash[20771]: cluster 2026-03-09T21:19:12.830850+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:13 vm07 bash[28052]: audit 2026-03-09T21:19:11.941515+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:13 vm07 bash[28052]: audit 2026-03-09T21:19:11.941515+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:13 vm07 bash[28052]: cluster 2026-03-09T21:19:12.830850+0000 mon.a (mon.0) 893 : 
cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T21:19:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:13 vm07 bash[28052]: cluster 2026-03-09T21:19:12.830850+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T21:19:13.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:12 vm10 bash[23387]: audit 2026-03-09T21:19:11.941515+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:13.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:12 vm10 bash[23387]: audit 2026-03-09T21:19:11.941515+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:13.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:12 vm10 bash[23387]: cluster 2026-03-09T21:19:12.830850+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T21:19:13.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:12 vm10 bash[23387]: cluster 2026-03-09T21:19:12.830850+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: cluster 2026-03-09T21:19:13.754750+0000 mgr.y (mgr.24416) 127 : cluster [DBG] pgmap v129: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: cluster 2026-03-09T21:19:13.754750+0000 mgr.y (mgr.24416) 127 : cluster [DBG] pgmap v129: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: cluster 2026-03-09T21:19:13.834015+0000 mon.a (mon.0) 894 : 
cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: cluster 2026-03-09T21:19:13.834015+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: audit 2026-03-09T21:19:13.881349+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.107:0/699575657' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: audit 2026-03-09T21:19:13.881349+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.107:0/699575657' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: audit 2026-03-09T21:19:13.881786+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: audit 2026-03-09T21:19:13.881786+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: cluster 2026-03-09T21:19:14.736865+0000 mon.a (mon.0) 896 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:14 vm07 bash[20771]: cluster 2026-03-09T21:19:14.736865+0000 mon.a (mon.0) 896 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:15.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: cluster 2026-03-09T21:19:13.754750+0000 mgr.y (mgr.24416) 127 : cluster [DBG] pgmap v129: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: cluster 2026-03-09T21:19:13.754750+0000 mgr.y (mgr.24416) 127 : cluster [DBG] pgmap v129: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: cluster 2026-03-09T21:19:13.834015+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: cluster 2026-03-09T21:19:13.834015+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: audit 2026-03-09T21:19:13.881349+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 
192.168.123.107:0/699575657' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: audit 2026-03-09T21:19:13.881349+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.107:0/699575657' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: audit 2026-03-09T21:19:13.881786+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: audit 2026-03-09T21:19:13.881786+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: cluster 2026-03-09T21:19:14.736865+0000 mon.a (mon.0) 896 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:15.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:14 vm07 bash[28052]: cluster 2026-03-09T21:19:14.736865+0000 mon.a (mon.0) 896 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: cluster 2026-03-09T21:19:13.754750+0000 mgr.y (mgr.24416) 127 : cluster [DBG] pgmap v129: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: cluster 2026-03-09T21:19:13.754750+0000 mgr.y (mgr.24416) 127 : cluster [DBG] pgmap v129: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: cluster 2026-03-09T21:19:13.834015+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: cluster 2026-03-09T21:19:13.834015+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: audit 2026-03-09T21:19:13.881349+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.107:0/699575657' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: audit 2026-03-09T21:19:13.881349+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.107:0/699575657' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: audit 2026-03-09T21:19:13.881786+0000 mon.a (mon.0) 895 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: audit 2026-03-09T21:19:13.881786+0000 mon.a (mon.0) 895 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: cluster 2026-03-09T21:19:14.736865+0000 mon.a (mon.0) 896 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:14 vm10 bash[23387]: cluster 2026-03-09T21:19:14.736865+0000 mon.a (mon.0) 896 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:19:15.900 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_zeros PASSED [ 29%] 2026-03-09T21:19:16.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:15 vm10 bash[23387]: audit 2026-03-09T21:19:14.874816+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:16.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:15 vm10 bash[23387]: audit 2026-03-09T21:19:14.874816+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:16.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:15 vm10 bash[23387]: cluster 2026-03-09T21:19:14.884063+0000 mon.a (mon.0) 898 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T21:19:16.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:15 vm10 bash[23387]: cluster 2026-03-09T21:19:14.884063+0000 mon.a (mon.0) 898 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:15 vm07 bash[20771]: audit 2026-03-09T21:19:14.874816+0000 mon.a (mon.0) 897 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:15 vm07 bash[20771]: audit 2026-03-09T21:19:14.874816+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:15 vm07 bash[20771]: cluster 2026-03-09T21:19:14.884063+0000 mon.a (mon.0) 898 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:15 vm07 bash[20771]: cluster 2026-03-09T21:19:14.884063+0000 mon.a (mon.0) 898 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:15 vm07 bash[28052]: audit 2026-03-09T21:19:14.874816+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:15 vm07 bash[28052]: audit 2026-03-09T21:19:14.874816+0000 mon.a (mon.0) 897 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:15 vm07 bash[28052]: cluster 2026-03-09T21:19:14.884063+0000 mon.a (mon.0) 898 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T21:19:16.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:15 vm07 bash[28052]: cluster 2026-03-09T21:19:14.884063+0000 mon.a (mon.0) 898 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-09T21:19:16.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:19:16 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:19:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:16 vm10 bash[23387]: cluster 2026-03-09T21:19:15.755165+0000 mgr.y (mgr.24416) 128 : cluster [DBG] pgmap v132: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:16 vm10 bash[23387]: cluster 2026-03-09T21:19:15.755165+0000 mgr.y (mgr.24416) 128 : cluster [DBG] pgmap v132: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:16 vm10 bash[23387]: cluster 2026-03-09T21:19:15.898561+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T21:19:17.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:16 vm10 bash[23387]: cluster 2026-03-09T21:19:15.898561+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:16 vm07 bash[20771]: cluster 2026-03-09T21:19:15.755165+0000 mgr.y (mgr.24416) 128 : cluster [DBG] pgmap v132: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:17.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:16 vm07 bash[20771]: cluster 2026-03-09T21:19:15.755165+0000 mgr.y (mgr.24416) 128 : cluster [DBG] pgmap v132: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:16 vm07 bash[20771]: cluster 2026-03-09T21:19:15.898561+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:16 vm07 bash[20771]: cluster 2026-03-09T21:19:15.898561+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:16 vm07 bash[28052]: cluster 2026-03-09T21:19:15.755165+0000 mgr.y (mgr.24416) 128 : cluster [DBG] pgmap v132: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:16 vm07 bash[28052]: cluster 2026-03-09T21:19:15.755165+0000 mgr.y (mgr.24416) 128 : cluster [DBG] pgmap v132: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:16 vm07 bash[28052]: cluster 2026-03-09T21:19:15.898561+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T21:19:17.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:16 vm07 bash[28052]: cluster 2026-03-09T21:19:15.898561+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:17 vm07 bash[28052]: audit 2026-03-09T21:19:16.292724+0000 mgr.y (mgr.24416) 129 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:17 vm07 bash[28052]: audit 2026-03-09T21:19:16.292724+0000 mgr.y (mgr.24416) 129 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:17 vm07 bash[28052]: cluster 2026-03-09T21:19:16.959338+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:17 vm07 bash[28052]: cluster 2026-03-09T21:19:16.959338+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:17 vm07 bash[28052]: cluster 2026-03-09T21:19:17.755615+0000 mgr.y (mgr.24416) 130 : cluster [DBG] pgmap v135: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:17 vm07 bash[28052]: cluster 2026-03-09T21:19:17.755615+0000 mgr.y (mgr.24416) 130 : cluster [DBG] pgmap v135: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:17 vm07 bash[20771]: audit 2026-03-09T21:19:16.292724+0000 mgr.y (mgr.24416) 129 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:17 vm07 bash[20771]: audit 2026-03-09T21:19:16.292724+0000 mgr.y (mgr.24416) 129 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:17 vm07 bash[20771]: cluster 
2026-03-09T21:19:16.959338+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:17 vm07 bash[20771]: cluster 2026-03-09T21:19:16.959338+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:17 vm07 bash[20771]: cluster 2026-03-09T21:19:17.755615+0000 mgr.y (mgr.24416) 130 : cluster [DBG] pgmap v135: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:18.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:17 vm07 bash[20771]: cluster 2026-03-09T21:19:17.755615+0000 mgr.y (mgr.24416) 130 : cluster [DBG] pgmap v135: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:17 vm10 bash[23387]: audit 2026-03-09T21:19:16.292724+0000 mgr.y (mgr.24416) 129 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:17 vm10 bash[23387]: audit 2026-03-09T21:19:16.292724+0000 mgr.y (mgr.24416) 129 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:17 vm10 bash[23387]: cluster 2026-03-09T21:19:16.959338+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T21:19:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:17 vm10 bash[23387]: cluster 2026-03-09T21:19:16.959338+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-09T21:19:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:17 vm10 bash[23387]: 
cluster 2026-03-09T21:19:17.755615+0000 mgr.y (mgr.24416) 130 : cluster [DBG] pgmap v135: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:17 vm10 bash[23387]: cluster 2026-03-09T21:19:17.755615+0000 mgr.y (mgr.24416) 130 : cluster [DBG] pgmap v135: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:18.960 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:19:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:19:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:18 vm07 bash[20771]: cluster 2026-03-09T21:19:17.959631+0000 mon.a (mon.0) 901 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:18 vm07 bash[20771]: cluster 2026-03-09T21:19:17.959631+0000 mon.a (mon.0) 901 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:18 vm07 bash[20771]: audit 2026-03-09T21:19:18.007393+0000 mon.a (mon.0) 902 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:18 vm07 bash[20771]: audit 2026-03-09T21:19:18.007393+0000 mon.a (mon.0) 902 : audit [INF] from='client.? 
192.168.123.107:0/1152905152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:18 vm07 bash[28052]: cluster 2026-03-09T21:19:17.959631+0000 mon.a (mon.0) 901 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:18 vm07 bash[28052]: cluster 2026-03-09T21:19:17.959631+0000 mon.a (mon.0) 901 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:18 vm07 bash[28052]: audit 2026-03-09T21:19:18.007393+0000 mon.a (mon.0) 902 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:19.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:18 vm07 bash[28052]: audit 2026-03-09T21:19:18.007393+0000 mon.a (mon.0) 902 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:18 vm10 bash[23387]: cluster 2026-03-09T21:19:17.959631+0000 mon.a (mon.0) 901 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T21:19:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:18 vm10 bash[23387]: cluster 2026-03-09T21:19:17.959631+0000 mon.a (mon.0) 901 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T21:19:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:18 vm10 bash[23387]: audit 2026-03-09T21:19:18.007393+0000 mon.a (mon.0) 902 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:18 vm10 bash[23387]: audit 2026-03-09T21:19:18.007393+0000 mon.a (mon.0) 902 : audit [INF] from='client.? 
192.168.123.107:0/1152905152' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:19.982 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_trunc PASSED [ 30%] 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:19 vm07 bash[20771]: audit 2026-03-09T21:19:18.971151+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:19 vm07 bash[20771]: audit 2026-03-09T21:19:18.971151+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:19 vm07 bash[20771]: cluster 2026-03-09T21:19:18.972965+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:19 vm07 bash[20771]: cluster 2026-03-09T21:19:18.972965+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:19 vm07 bash[20771]: cluster 2026-03-09T21:19:19.755930+0000 mgr.y (mgr.24416) 131 : cluster [DBG] pgmap v138: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:19 vm07 bash[20771]: cluster 2026-03-09T21:19:19.755930+0000 mgr.y (mgr.24416) 131 : cluster [DBG] pgmap v138: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:19 vm07 bash[28052]: audit 
2026-03-09T21:19:18.971151+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:19 vm07 bash[28052]: audit 2026-03-09T21:19:18.971151+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:19 vm07 bash[28052]: cluster 2026-03-09T21:19:18.972965+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:19 vm07 bash[28052]: cluster 2026-03-09T21:19:18.972965+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:19 vm07 bash[28052]: cluster 2026-03-09T21:19:19.755930+0000 mgr.y (mgr.24416) 131 : cluster [DBG] pgmap v138: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:19 vm07 bash[28052]: cluster 2026-03-09T21:19:19.755930+0000 mgr.y (mgr.24416) 131 : cluster [DBG] pgmap v138: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:19 vm10 bash[23387]: audit 2026-03-09T21:19:18.971151+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 192.168.123.107:0/1152905152' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:19 vm10 bash[23387]: audit 2026-03-09T21:19:18.971151+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 
192.168.123.107:0/1152905152' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:19 vm10 bash[23387]: cluster 2026-03-09T21:19:18.972965+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T21:19:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:19 vm10 bash[23387]: cluster 2026-03-09T21:19:18.972965+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T21:19:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:19 vm10 bash[23387]: cluster 2026-03-09T21:19:19.755930+0000 mgr.y (mgr.24416) 131 : cluster [DBG] pgmap v138: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:19 vm10 bash[23387]: cluster 2026-03-09T21:19:19.755930+0000 mgr.y (mgr.24416) 131 : cluster [DBG] pgmap v138: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:21.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:21 vm07 bash[20771]: cluster 2026-03-09T21:19:19.976851+0000 mon.a (mon.0) 905 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T21:19:21.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:21 vm07 bash[20771]: cluster 2026-03-09T21:19:19.976851+0000 mon.a (mon.0) 905 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T21:19:21.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:21 vm07 bash[28052]: cluster 2026-03-09T21:19:19.976851+0000 mon.a (mon.0) 905 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T21:19:21.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:21 vm07 bash[28052]: cluster 2026-03-09T21:19:19.976851+0000 mon.a (mon.0) 905 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T21:19:21.442 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:21 vm10 bash[23387]: cluster 2026-03-09T21:19:19.976851+0000 mon.a (mon.0) 905 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T21:19:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:21 vm10 bash[23387]: cluster 2026-03-09T21:19:19.976851+0000 mon.a (mon.0) 905 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:22 vm07 bash[20771]: cluster 2026-03-09T21:19:21.103537+0000 mon.a (mon.0) 906 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:22 vm07 bash[20771]: cluster 2026-03-09T21:19:21.103537+0000 mon.a (mon.0) 906 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:22 vm07 bash[20771]: cluster 2026-03-09T21:19:21.756276+0000 mgr.y (mgr.24416) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:22 vm07 bash[20771]: cluster 2026-03-09T21:19:21.756276+0000 mgr.y (mgr.24416) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:22 vm07 bash[28052]: cluster 2026-03-09T21:19:21.103537+0000 mon.a (mon.0) 906 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:22 vm07 bash[28052]: cluster 2026-03-09T21:19:21.103537+0000 mon.a (mon.0) 906 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:22 vm07 bash[28052]: cluster 2026-03-09T21:19:21.756276+0000 mgr.y (mgr.24416) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 
MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:22.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:22 vm07 bash[28052]: cluster 2026-03-09T21:19:21.756276+0000 mgr.y (mgr.24416) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:22 vm10 bash[23387]: cluster 2026-03-09T21:19:21.103537+0000 mon.a (mon.0) 906 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T21:19:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:22 vm10 bash[23387]: cluster 2026-03-09T21:19:21.103537+0000 mon.a (mon.0) 906 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-09T21:19:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:22 vm10 bash[23387]: cluster 2026-03-09T21:19:21.756276+0000 mgr.y (mgr.24416) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:22 vm10 bash[23387]: cluster 2026-03-09T21:19:21.756276+0000 mgr.y (mgr.24416) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:23 vm07 bash[20771]: cluster 2026-03-09T21:19:22.251948+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:23 vm07 bash[20771]: cluster 2026-03-09T21:19:22.251948+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:23 vm07 bash[20771]: audit 2026-03-09T21:19:22.294142+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 
192.168.123.107:0/319790184' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:23 vm07 bash[20771]: audit 2026-03-09T21:19:22.294142+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.107:0/319790184' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:23 vm07 bash[20771]: audit 2026-03-09T21:19:22.294508+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:23 vm07 bash[20771]: audit 2026-03-09T21:19:22.294508+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:23 vm07 bash[28052]: cluster 2026-03-09T21:19:22.251948+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:23 vm07 bash[28052]: cluster 2026-03-09T21:19:22.251948+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:23 vm07 bash[28052]: audit 2026-03-09T21:19:22.294142+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.107:0/319790184' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:23 vm07 bash[28052]: audit 2026-03-09T21:19:22.294142+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 
192.168.123.107:0/319790184' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:23 vm07 bash[28052]: audit 2026-03-09T21:19:22.294508+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:23 vm07 bash[28052]: audit 2026-03-09T21:19:22.294508+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:23 vm10 bash[23387]: cluster 2026-03-09T21:19:22.251948+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T21:19:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:23 vm10 bash[23387]: cluster 2026-03-09T21:19:22.251948+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-09T21:19:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:23 vm10 bash[23387]: audit 2026-03-09T21:19:22.294142+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.107:0/319790184' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:23 vm10 bash[23387]: audit 2026-03-09T21:19:22.294142+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.107:0/319790184' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:23 vm10 bash[23387]: audit 2026-03-09T21:19:22.294508+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:23 vm10 bash[23387]: audit 2026-03-09T21:19:22.294508+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:24.269 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext PASSED [ 31%] 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.244889+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.244889+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: cluster 2026-03-09T21:19:23.250785+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: cluster 2026-03-09T21:19:23.250785+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.561074+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.561074+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: cluster 2026-03-09T21:19:23.756687+0000 mgr.y (mgr.24416) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: cluster 2026-03-09T21:19:23.756687+0000 mgr.y (mgr.24416) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.913737+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.913737+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.914769+0000 mon.c (mon.2) 75 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.914769+0000 mon.c (mon.2) 75 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.979357+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:19:24.615 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:24 vm07 bash[20771]: audit 2026-03-09T21:19:23.979357+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.244889+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.244889+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: cluster 2026-03-09T21:19:23.250785+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: cluster 2026-03-09T21:19:23.250785+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.561074+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.561074+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: cluster 2026-03-09T21:19:23.756687+0000 mgr.y (mgr.24416) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 
2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: cluster 2026-03-09T21:19:23.756687+0000 mgr.y (mgr.24416) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.913737+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.913737+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.914769+0000 mon.c (mon.2) 75 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.914769+0000 mon.c (mon.2) 75 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.979357+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:19:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:24 vm07 bash[28052]: audit 2026-03-09T21:19:23.979357+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.244889+0000 mon.a 
(mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.244889+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: cluster 2026-03-09T21:19:23.250785+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: cluster 2026-03-09T21:19:23.250785+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.561074+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.561074+0000 mon.c (mon.2) 73 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: cluster 2026-03-09T21:19:23.756687+0000 mgr.y (mgr.24416) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: cluster 2026-03-09T21:19:23.756687+0000 mgr.y (mgr.24416) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:19:24.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.913737+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.913737+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.914769+0000 mon.c (mon.2) 75 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.914769+0000 mon.c (mon.2) 75 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.979357+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:19:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:24 vm10 bash[23387]: audit 2026-03-09T21:19:23.979357+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:19:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:25 vm07 bash[20771]: cluster 2026-03-09T21:19:24.266020+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T21:19:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:25 vm07 bash[20771]: cluster 2026-03-09T21:19:24.266020+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T21:19:25.615 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:25 vm07 bash[28052]: cluster 2026-03-09T21:19:24.266020+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T21:19:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:25 vm07 bash[28052]: cluster 2026-03-09T21:19:24.266020+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T21:19:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:25 vm10 bash[23387]: cluster 2026-03-09T21:19:24.266020+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T21:19:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:25 vm10 bash[23387]: cluster 2026-03-09T21:19:24.266020+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T21:19:26.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:26 vm07 bash[20771]: cluster 2026-03-09T21:19:25.289082+0000 mon.a (mon.0) 913 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T21:19:26.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:26 vm07 bash[20771]: cluster 2026-03-09T21:19:25.289082+0000 mon.a (mon.0) 913 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T21:19:26.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:26 vm07 bash[20771]: cluster 2026-03-09T21:19:25.757012+0000 mgr.y (mgr.24416) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:26.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:26 vm07 bash[20771]: cluster 2026-03-09T21:19:25.757012+0000 mgr.y (mgr.24416) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:26.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:26 vm07 bash[28052]: cluster 2026-03-09T21:19:25.289082+0000 mon.a (mon.0) 913 : cluster [DBG] osdmap e128: 8 total, 8 up, 
8 in 2026-03-09T21:19:26.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:26 vm07 bash[28052]: cluster 2026-03-09T21:19:25.289082+0000 mon.a (mon.0) 913 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T21:19:26.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:26 vm07 bash[28052]: cluster 2026-03-09T21:19:25.757012+0000 mgr.y (mgr.24416) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:26.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:26 vm07 bash[28052]: cluster 2026-03-09T21:19:25.757012+0000 mgr.y (mgr.24416) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:26 vm10 bash[23387]: cluster 2026-03-09T21:19:25.289082+0000 mon.a (mon.0) 913 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T21:19:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:26 vm10 bash[23387]: cluster 2026-03-09T21:19:25.289082+0000 mon.a (mon.0) 913 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-09T21:19:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:26 vm10 bash[23387]: cluster 2026-03-09T21:19:25.757012+0000 mgr.y (mgr.24416) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:26 vm10 bash[23387]: cluster 2026-03-09T21:19:25.757012+0000 mgr.y (mgr.24416) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:26.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:19:26 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:19:27.615 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.302704+0000 mgr.y (mgr.24416) 135 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.302704+0000 mgr.y (mgr.24416) 135 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: cluster 2026-03-09T21:19:26.309645+0000 mon.a (mon.0) 914 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T21:19:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: cluster 2026-03-09T21:19:26.309645+0000 mon.a (mon.0) 914 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T21:19:27.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.360751+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.107:0/3517322409' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.360751+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.107:0/3517322409' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.361215+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.361215+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.960128+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:27 vm07 bash[20771]: audit 2026-03-09T21:19:26.960128+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.302704+0000 mgr.y (mgr.24416) 135 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.302704+0000 mgr.y (mgr.24416) 135 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: cluster 2026-03-09T21:19:26.309645+0000 mon.a (mon.0) 914 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: cluster 2026-03-09T21:19:26.309645+0000 mon.a (mon.0) 914 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.360751+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 
192.168.123.107:0/3517322409' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.360751+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.107:0/3517322409' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.361215+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.361215+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.960128+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:27.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:27 vm07 bash[28052]: audit 2026-03-09T21:19:26.960128+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.302704+0000 mgr.y (mgr.24416) 135 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.302704+0000 mgr.y (mgr.24416) 135 : audit [DBG] from='client.24400 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: cluster 2026-03-09T21:19:26.309645+0000 mon.a (mon.0) 914 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: cluster 2026-03-09T21:19:26.309645+0000 mon.a (mon.0) 914 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.360751+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.107:0/3517322409' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.360751+0000 mon.c (mon.2) 76 : audit [INF] from='client.? 192.168.123.107:0/3517322409' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.361215+0000 mon.a (mon.0) 915 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.361215+0000 mon.a (mon.0) 915 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.960128+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:27.693 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:27 vm10 bash[23387]: audit 2026-03-09T21:19:26.960128+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:28.395 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects_empty PASSED [ 32%] 2026-03-09T21:19:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:28 vm10 bash[23387]: audit 2026-03-09T21:19:27.312388+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:28 vm10 bash[23387]: audit 2026-03-09T21:19:27.312388+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:28 vm10 bash[23387]: cluster 2026-03-09T21:19:27.319935+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T21:19:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:28 vm10 bash[23387]: cluster 2026-03-09T21:19:27.319935+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-09T21:19:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:28 vm10 bash[23387]: cluster 2026-03-09T21:19:27.757499+0000 mgr.y (mgr.24416) 136 : cluster [DBG] pgmap v150: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T21:19:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:28 vm10 bash[23387]: cluster 2026-03-09T21:19:27.757499+0000 mgr.y (mgr.24416) 136 : cluster [DBG] pgmap v150: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:28 vm07 bash[20771]: audit 2026-03-09T21:19:27.312388+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:28 vm07 bash[20771]: audit 2026-03-09T21:19:27.312388+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:28 vm07 bash[20771]: cluster 2026-03-09T21:19:27.319935+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:28 vm07 bash[20771]: cluster 2026-03-09T21:19:27.757499+0000 mgr.y (mgr.24416) 136 : cluster [DBG] pgmap v150: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-09T21:19:28.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:19:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:19:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:28 vm07 bash[28052]: audit 2026-03-09T21:19:27.312388+0000 mon.a (mon.0) 916 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:28 vm07 bash[28052]: cluster 2026-03-09T21:19:27.319935+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
2026-03-09T21:19:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:28 vm07 bash[28052]: cluster 2026-03-09T21:19:27.757499+0000 mgr.y (mgr.24416) 136 : cluster [DBG] pgmap v150: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-09T21:19:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:29 vm10 bash[23387]: cluster 2026-03-09T21:19:28.392248+0000 mon.a (mon.0) 918 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in
2026-03-09T21:19:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:29 vm07 bash[20771]: cluster 2026-03-09T21:19:28.392248+0000 mon.a (mon.0) 918 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in
2026-03-09T21:19:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:29 vm07 bash[28052]: cluster 2026-03-09T21:19:28.392248+0000 mon.a (mon.0) 918 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in
2026-03-09T21:19:30.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:30 vm10 bash[23387]: cluster 2026-03-09T21:19:29.431566+0000 mon.a (mon.0) 919 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in
2026-03-09T21:19:30.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:30 vm10 bash[23387]: cluster 2026-03-09T21:19:29.757915+0000 mgr.y (mgr.24416) 137 : cluster [DBG] pgmap v153: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:30 vm07 bash[20771]: cluster 2026-03-09T21:19:29.431566+0000 mon.a (mon.0) 919 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in
2026-03-09T21:19:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:30 vm07 bash[20771]: cluster 2026-03-09T21:19:29.757915+0000 mgr.y (mgr.24416) 137 : cluster [DBG] pgmap v153: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:30 vm07 bash[28052]: cluster 2026-03-09T21:19:29.431566+0000 mon.a (mon.0) 919 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in
2026-03-09T21:19:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:30 vm07 bash[28052]: cluster 2026-03-09T21:19:29.757915+0000 mgr.y (mgr.24416) 137 : cluster [DBG] pgmap v153: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:31.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:31 vm10 bash[23387]: cluster 2026-03-09T21:19:30.447977+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in
2026-03-09T21:19:31.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:31 vm10 bash[23387]: audit 2026-03-09T21:19:30.485196+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.107:0/3100560116' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:31.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:31 vm10 bash[23387]: audit 2026-03-09T21:19:30.485579+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:31 vm07 bash[20771]: cluster 2026-03-09T21:19:30.447977+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in
2026-03-09T21:19:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:31 vm07 bash[20771]: audit 2026-03-09T21:19:30.485196+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.107:0/3100560116' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:31 vm07 bash[20771]: audit 2026-03-09T21:19:30.485579+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:31 vm07 bash[28052]: cluster 2026-03-09T21:19:30.447977+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in
2026-03-09T21:19:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:31 vm07 bash[28052]: audit 2026-03-09T21:19:30.485196+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.107:0/3100560116' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:31 vm07 bash[28052]: audit 2026-03-09T21:19:30.485579+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:32.471 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_read_crc PASSED [ 34%]
2026-03-09T21:19:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:32 vm07 bash[20771]: audit 2026-03-09T21:19:31.429681+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:32 vm07 bash[20771]: cluster 2026-03-09T21:19:31.436617+0000 mon.a (mon.0) 923 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-09T21:19:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:32 vm07 bash[20771]: cluster 2026-03-09T21:19:31.758214+0000 mgr.y (mgr.24416) 138 : cluster [DBG] pgmap v156: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:32 vm07 bash[28052]: audit 2026-03-09T21:19:31.429681+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:32 vm07 bash[28052]: cluster 2026-03-09T21:19:31.436617+0000 mon.a (mon.0) 923 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-09T21:19:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:32 vm07 bash[28052]: cluster 2026-03-09T21:19:31.758214+0000 mgr.y (mgr.24416) 138 : cluster [DBG] pgmap v156: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:32 vm10 bash[23387]: audit 2026-03-09T21:19:31.429681+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:32 vm10 bash[23387]: cluster 2026-03-09T21:19:31.436617+0000 mon.a (mon.0) 923 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-09T21:19:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:32 vm10 bash[23387]: cluster 2026-03-09T21:19:31.758214+0000 mgr.y (mgr.24416) 138 : cluster [DBG] pgmap v156: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:33.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:33 vm07 bash[20771]: cluster 2026-03-09T21:19:32.470248+0000 mon.a (mon.0) 924 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-09T21:19:33.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:33 vm07 bash[28052]: cluster 2026-03-09T21:19:32.470248+0000 mon.a (mon.0) 924 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-09T21:19:33.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:33 vm10 bash[23387]: cluster 2026-03-09T21:19:32.470248+0000 mon.a (mon.0) 924 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-09T21:19:34.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:34 vm07 bash[20771]: cluster 2026-03-09T21:19:33.531205+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-09T21:19:34.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:34 vm07 bash[20771]: cluster 2026-03-09T21:19:33.758505+0000 mgr.y (mgr.24416) 139 : cluster [DBG] pgmap v159: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:34.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:34 vm07 bash[20771]: cluster 2026-03-09T21:19:34.533488+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-09T21:19:34.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:34 vm07 bash[28052]: cluster 2026-03-09T21:19:33.531205+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-09T21:19:34.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:34 vm07 bash[28052]: cluster 2026-03-09T21:19:33.758505+0000 mgr.y (mgr.24416) 139 : cluster [DBG] pgmap v159: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:34.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:34 vm07 bash[28052]: cluster 2026-03-09T21:19:34.533488+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-09T21:19:34.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:34 vm10 bash[23387]: cluster 2026-03-09T21:19:33.531205+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-09T21:19:34.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:34 vm10 bash[23387]: cluster 2026-03-09T21:19:33.758505+0000 mgr.y (mgr.24416) 139 : cluster [DBG] pgmap v159: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:34.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:34 vm10 bash[23387]: cluster 2026-03-09T21:19:34.533488+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-09T21:19:35.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:35 vm07 bash[20771]: audit 2026-03-09T21:19:34.588237+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.107:0/1215654637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:35.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:35 vm07 bash[20771]: audit 2026-03-09T21:19:34.588595+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:35.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:35 vm07 bash[28052]: audit 2026-03-09T21:19:34.588237+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.107:0/1215654637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:35.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:35 vm07 bash[28052]: audit 2026-03-09T21:19:34.588595+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:35.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:35 vm10 bash[23387]: audit 2026-03-09T21:19:34.588237+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.107:0/1215654637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:35.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:35 vm10 bash[23387]: audit 2026-03-09T21:19:34.588595+0000 mon.a (mon.0) 927 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:36.602 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects PASSED [ 35%]
2026-03-09T21:19:36.628 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:19:36 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:19:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:36 vm10 bash[23387]: audit 2026-03-09T21:19:35.586694+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:36 vm10 bash[23387]: cluster 2026-03-09T21:19:35.590405+0000 mon.a (mon.0) 929 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-09T21:19:36.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:36 vm10 bash[23387]: cluster 2026-03-09T21:19:35.758807+0000 mgr.y (mgr.24416) 140 : cluster [DBG] pgmap v162: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:36 vm07 bash[20771]: audit 2026-03-09T21:19:35.586694+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:36 vm07 bash[20771]: cluster 2026-03-09T21:19:35.590405+0000 mon.a (mon.0) 929 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-09T21:19:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:36 vm07 bash[20771]: cluster 2026-03-09T21:19:35.758807+0000 mgr.y (mgr.24416) 140 : cluster [DBG] pgmap v162: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:36 vm07 bash[28052]: audit 2026-03-09T21:19:35.586694+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:36 vm07 bash[28052]: cluster 2026-03-09T21:19:35.590405+0000 mon.a (mon.0) 929 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-09T21:19:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:36 vm07 bash[28052]: cluster 2026-03-09T21:19:35.758807+0000 mgr.y (mgr.24416) 140 : cluster [DBG] pgmap v162: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:37.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:37 vm10 bash[23387]: audit 2026-03-09T21:19:36.310281+0000 mgr.y (mgr.24416) 141 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:37.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:37 vm10 bash[23387]: cluster 2026-03-09T21:19:36.599838+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-09T21:19:37.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:37 vm10 bash[23387]: cluster 2026-03-09T21:19:37.628193+0000 mon.a (mon.0) 931 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-09T21:19:38.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:37 vm07 bash[20771]: audit 2026-03-09T21:19:36.310281+0000 mgr.y (mgr.24416) 141 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:38.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:37 vm07 bash[20771]: cluster 2026-03-09T21:19:36.599838+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-09T21:19:38.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:37 vm07 bash[20771]: cluster 2026-03-09T21:19:37.628193+0000 mon.a (mon.0) 931 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-09T21:19:38.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:37 vm07 bash[28052]: audit 2026-03-09T21:19:36.310281+0000 mgr.y (mgr.24416) 141 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:38.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:37 vm07 bash[28052]: cluster 2026-03-09T21:19:36.599838+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-09T21:19:38.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:37 vm07 bash[28052]: cluster 2026-03-09T21:19:37.628193+0000 mon.a (mon.0) 931 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-09T21:19:38.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:38 vm10 bash[23387]: cluster 2026-03-09T21:19:37.759241+0000 mgr.y (mgr.24416) 142 : cluster [DBG] pgmap v165: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:38.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:38 vm10 bash[23387]: cluster 2026-03-09T21:19:38.630490+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-09T21:19:39.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:38 vm07 bash[20771]: cluster 2026-03-09T21:19:37.759241+0000 mgr.y (mgr.24416) 142 : cluster [DBG] pgmap v165: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:39.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:38 vm07 bash[20771]: cluster 2026-03-09T21:19:38.630490+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-09T21:19:39.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:19:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:19:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:19:39.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:38 vm07 bash[28052]: cluster 2026-03-09T21:19:37.759241+0000 mgr.y (mgr.24416) 142 : cluster [DBG] pgmap v165: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:39.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:38 vm07 bash[28052]: cluster 2026-03-09T21:19:38.630490+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-09T21:19:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:39 vm10 bash[23387]: audit 2026-03-09T21:19:38.693224+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.107:0/3808977356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:39 vm10 bash[23387]: audit 2026-03-09T21:19:38.693492+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:39 vm10 bash[23387]: audit 2026-03-09T21:19:39.628885+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:39.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:39 vm10 bash[23387]: cluster 2026-03-09T21:19:39.632816+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: audit 2026-03-09T21:19:38.693224+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.107:0/3808977356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: audit 2026-03-09T21:19:38.693492+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: audit 2026-03-09T21:19:38.693492+0000 mon.a (mon.0) 933 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: audit 2026-03-09T21:19:39.628885+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: audit 2026-03-09T21:19:39.628885+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: cluster 2026-03-09T21:19:39.632816+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:39 vm07 bash[20771]: cluster 2026-03-09T21:19:39.632816+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: audit 2026-03-09T21:19:38.693224+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.107:0/3808977356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: audit 2026-03-09T21:19:38.693224+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.107:0/3808977356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: audit 2026-03-09T21:19:38.693492+0000 mon.a (mon.0) 933 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: audit 2026-03-09T21:19:38.693492+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: audit 2026-03-09T21:19:39.628885+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: audit 2026-03-09T21:19:39.628885+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: cluster 2026-03-09T21:19:39.632816+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T21:19:40.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:39 vm07 bash[28052]: cluster 2026-03-09T21:19:39.632816+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-09T21:19:40.656 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_ns_objects PASSED [ 36%] 2026-03-09T21:19:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:40 vm10 bash[23387]: cluster 2026-03-09T21:19:39.759658+0000 mgr.y (mgr.24416) 143 : cluster [DBG] pgmap v168: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:40 vm10 bash[23387]: cluster 2026-03-09T21:19:39.759658+0000 mgr.y (mgr.24416) 143 : cluster [DBG] pgmap v168: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 
327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:40 vm10 bash[23387]: cluster 2026-03-09T21:19:40.650974+0000 mon.a (mon.0) 936 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T21:19:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:40 vm10 bash[23387]: cluster 2026-03-09T21:19:40.650974+0000 mon.a (mon.0) 936 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:40 vm07 bash[20771]: cluster 2026-03-09T21:19:39.759658+0000 mgr.y (mgr.24416) 143 : cluster [DBG] pgmap v168: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:40 vm07 bash[20771]: cluster 2026-03-09T21:19:39.759658+0000 mgr.y (mgr.24416) 143 : cluster [DBG] pgmap v168: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:40 vm07 bash[20771]: cluster 2026-03-09T21:19:40.650974+0000 mon.a (mon.0) 936 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:40 vm07 bash[20771]: cluster 2026-03-09T21:19:40.650974+0000 mon.a (mon.0) 936 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:40 vm07 bash[28052]: cluster 2026-03-09T21:19:39.759658+0000 mgr.y (mgr.24416) 143 : cluster [DBG] pgmap v168: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:40 vm07 bash[28052]: cluster 2026-03-09T21:19:39.759658+0000 mgr.y (mgr.24416) 143 : cluster [DBG] pgmap v168: 196 pgs: 
32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:40 vm07 bash[28052]: cluster 2026-03-09T21:19:40.650974+0000 mon.a (mon.0) 936 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T21:19:41.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:40 vm07 bash[28052]: cluster 2026-03-09T21:19:40.650974+0000 mon.a (mon.0) 936 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:42 vm07 bash[20771]: cluster 2026-03-09T21:19:41.730276+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:42 vm07 bash[20771]: cluster 2026-03-09T21:19:41.730276+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:42 vm07 bash[20771]: cluster 2026-03-09T21:19:41.759951+0000 mgr.y (mgr.24416) 144 : cluster [DBG] pgmap v171: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:42 vm07 bash[20771]: cluster 2026-03-09T21:19:41.759951+0000 mgr.y (mgr.24416) 144 : cluster [DBG] pgmap v171: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:42 vm07 bash[20771]: audit 2026-03-09T21:19:41.968004+0000 mon.c (mon.2) 79 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:42 vm07 bash[20771]: audit 2026-03-09T21:19:41.968004+0000 mon.c (mon.2) 79 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:42 vm07 bash[28052]: cluster 2026-03-09T21:19:41.730276+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:42 vm07 bash[28052]: cluster 2026-03-09T21:19:41.730276+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:42 vm07 bash[28052]: cluster 2026-03-09T21:19:41.759951+0000 mgr.y (mgr.24416) 144 : cluster [DBG] pgmap v171: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:42 vm07 bash[28052]: cluster 2026-03-09T21:19:41.759951+0000 mgr.y (mgr.24416) 144 : cluster [DBG] pgmap v171: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:42 vm07 bash[28052]: audit 2026-03-09T21:19:41.968004+0000 mon.c (mon.2) 79 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:43.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:42 vm07 bash[28052]: audit 2026-03-09T21:19:41.968004+0000 mon.c (mon.2) 79 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:42 vm10 bash[23387]: cluster 2026-03-09T21:19:41.730276+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T21:19:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:42 vm10 bash[23387]: cluster 2026-03-09T21:19:41.730276+0000 mon.a (mon.0) 937 : cluster 
[DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-09T21:19:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:42 vm10 bash[23387]: cluster 2026-03-09T21:19:41.759951+0000 mgr.y (mgr.24416) 144 : cluster [DBG] pgmap v171: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:42 vm10 bash[23387]: cluster 2026-03-09T21:19:41.759951+0000 mgr.y (mgr.24416) 144 : cluster [DBG] pgmap v171: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:42 vm10 bash[23387]: audit 2026-03-09T21:19:41.968004+0000 mon.c (mon.2) 79 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:43.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:42 vm10 bash[23387]: audit 2026-03-09T21:19:41.968004+0000 mon.c (mon.2) 79 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: cluster 2026-03-09T21:19:42.743454+0000 mon.a (mon.0) 938 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: cluster 2026-03-09T21:19:42.743454+0000 mon.a (mon.0) 938 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: audit 2026-03-09T21:19:42.796684+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 
192.168.123.107:0/2050854112' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: audit 2026-03-09T21:19:42.796684+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.107:0/2050854112' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: audit 2026-03-09T21:19:42.797049+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: audit 2026-03-09T21:19:42.797049+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: audit 2026-03-09T21:19:43.727230+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: audit 2026-03-09T21:19:43.727230+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: cluster 2026-03-09T21:19:43.735895+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:43 vm07 bash[20771]: cluster 2026-03-09T21:19:43.735895+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: cluster 2026-03-09T21:19:42.743454+0000 mon.a (mon.0) 938 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: cluster 2026-03-09T21:19:42.743454+0000 mon.a (mon.0) 938 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: audit 2026-03-09T21:19:42.796684+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.107:0/2050854112' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: audit 2026-03-09T21:19:42.796684+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.107:0/2050854112' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: audit 2026-03-09T21:19:42.797049+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: audit 2026-03-09T21:19:42.797049+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: audit 2026-03-09T21:19:43.727230+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: audit 2026-03-09T21:19:43.727230+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: cluster 2026-03-09T21:19:43.735895+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T21:19:44.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:43 vm07 bash[28052]: cluster 2026-03-09T21:19:43.735895+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: cluster 2026-03-09T21:19:42.743454+0000 mon.a (mon.0) 938 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: cluster 2026-03-09T21:19:42.743454+0000 mon.a (mon.0) 938 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: audit 2026-03-09T21:19:42.796684+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.107:0/2050854112' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: audit 2026-03-09T21:19:42.796684+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 
192.168.123.107:0/2050854112' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: audit 2026-03-09T21:19:42.797049+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: audit 2026-03-09T21:19:42.797049+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: audit 2026-03-09T21:19:43.727230+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: audit 2026-03-09T21:19:43.727230+0000 mon.a (mon.0) 940 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: cluster 2026-03-09T21:19:43.735895+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T21:19:44.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:43 vm10 bash[23387]: cluster 2026-03-09T21:19:43.735895+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-09T21:19:44.749 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs PASSED [ 37%] 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:44 vm07 bash[20771]: cluster 2026-03-09T21:19:43.760264+0000 mgr.y (mgr.24416) 145 : cluster [DBG] pgmap v174: 196 pgs: 6 creating+activating, 25 creating+peering, 165 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:44 vm07 bash[20771]: cluster 2026-03-09T21:19:43.760264+0000 mgr.y (mgr.24416) 145 : cluster [DBG] pgmap v174: 196 pgs: 6 creating+activating, 25 creating+peering, 165 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:44 vm07 bash[20771]: cluster 2026-03-09T21:19:44.746334+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:44 vm07 bash[20771]: cluster 2026-03-09T21:19:44.746334+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:44 vm07 bash[28052]: cluster 2026-03-09T21:19:43.760264+0000 mgr.y (mgr.24416) 145 : cluster [DBG] pgmap v174: 196 pgs: 6 creating+activating, 25 creating+peering, 165 active+clean; 455 KiB data, 327 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:44 vm07 bash[28052]: cluster 2026-03-09T21:19:43.760264+0000 mgr.y (mgr.24416) 145 : cluster [DBG] pgmap v174: 196 pgs: 6 creating+activating, 25 creating+peering, 165 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:44 vm07 bash[28052]: cluster 2026-03-09T21:19:44.746334+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T21:19:45.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:44 vm07 bash[28052]: cluster 2026-03-09T21:19:44.746334+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T21:19:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:44 vm10 bash[23387]: cluster 2026-03-09T21:19:43.760264+0000 mgr.y (mgr.24416) 145 : cluster [DBG] pgmap v174: 196 pgs: 6 creating+activating, 25 creating+peering, 165 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:44 vm10 bash[23387]: cluster 2026-03-09T21:19:43.760264+0000 mgr.y (mgr.24416) 145 : cluster [DBG] pgmap v174: 196 pgs: 6 creating+activating, 25 creating+peering, 165 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:44 vm10 bash[23387]: cluster 2026-03-09T21:19:44.746334+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T21:19:45.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:44 vm10 bash[23387]: cluster 2026-03-09T21:19:44.746334+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-09T21:19:46.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:19:46 vm10 bash[48970]: debug there is no tcmu-runner 
data available 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:46 vm07 bash[28052]: cluster 2026-03-09T21:19:45.742107+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:46 vm07 bash[28052]: cluster 2026-03-09T21:19:45.742107+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:46 vm07 bash[28052]: cluster 2026-03-09T21:19:45.760588+0000 mgr.y (mgr.24416) 146 : cluster [DBG] pgmap v177: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:46 vm07 bash[28052]: cluster 2026-03-09T21:19:45.760588+0000 mgr.y (mgr.24416) 146 : cluster [DBG] pgmap v177: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:46 vm07 bash[20771]: cluster 2026-03-09T21:19:45.742107+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:46 vm07 bash[20771]: cluster 2026-03-09T21:19:45.742107+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:46 vm07 bash[20771]: cluster 2026-03-09T21:19:45.760588+0000 mgr.y (mgr.24416) 146 : cluster [DBG] pgmap v177: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:47.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:46 vm07 bash[20771]: cluster 2026-03-09T21:19:45.760588+0000 mgr.y (mgr.24416) 146 : cluster [DBG] pgmap v177: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:46 vm10 bash[23387]: cluster 2026-03-09T21:19:45.742107+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T21:19:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:46 vm10 bash[23387]: cluster 2026-03-09T21:19:45.742107+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-09T21:19:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:46 vm10 bash[23387]: cluster 2026-03-09T21:19:45.760588+0000 mgr.y (mgr.24416) 146 : cluster [DBG] pgmap v177: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:47.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:46 vm10 bash[23387]: cluster 2026-03-09T21:19:45.760588+0000 mgr.y (mgr.24416) 146 : cluster [DBG] pgmap v177: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:47 vm07 bash[20771]: audit 2026-03-09T21:19:46.318555+0000 mgr.y (mgr.24416) 147 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:47 vm07 bash[20771]: audit 2026-03-09T21:19:46.318555+0000 mgr.y (mgr.24416) 147 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:47 vm07 bash[20771]: cluster 2026-03-09T21:19:46.756647+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:47 vm07 bash[20771]: cluster 2026-03-09T21:19:46.756647+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e149: 
8 total, 8 up, 8 in
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:47 vm07 bash[20771]: audit 2026-03-09T21:19:46.816177+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:47 vm07 bash[20771]: audit 2026-03-09T21:19:46.816177+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:47 vm07 bash[28052]: audit 2026-03-09T21:19:46.318555+0000 mgr.y (mgr.24416) 147 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:47 vm07 bash[28052]: audit 2026-03-09T21:19:46.318555+0000 mgr.y (mgr.24416) 147 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:47 vm07 bash[28052]: cluster 2026-03-09T21:19:46.756647+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:47 vm07 bash[28052]: cluster 2026-03-09T21:19:46.756647+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:47 vm07 bash[28052]: audit 2026-03-09T21:19:46.816177+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:48.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:47 vm07 bash[28052]: audit 2026-03-09T21:19:46.816177+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:47 vm10 bash[23387]: audit 2026-03-09T21:19:46.318555+0000 mgr.y (mgr.24416) 147 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:47 vm10 bash[23387]: audit 2026-03-09T21:19:46.318555+0000 mgr.y (mgr.24416) 147 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:47 vm10 bash[23387]: cluster 2026-03-09T21:19:46.756647+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T21:19:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:47 vm10 bash[23387]: cluster 2026-03-09T21:19:46.756647+0000 mon.a (mon.0) 944 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-09T21:19:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:47 vm10 bash[23387]: audit 2026-03-09T21:19:46.816177+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:48.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:47 vm10 bash[23387]: audit 2026-03-09T21:19:46.816177+0000 mon.a (mon.0) 945 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:48.758 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_obj_xattrs PASSED [ 38%]
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: audit 2026-03-09T21:19:47.750227+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: audit 2026-03-09T21:19:47.750227+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: cluster 2026-03-09T21:19:47.757215+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: cluster 2026-03-09T21:19:47.757215+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: cluster 2026-03-09T21:19:47.760892+0000 mgr.y (mgr.24416) 148 : cluster [DBG] pgmap v180: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: cluster 2026-03-09T21:19:47.760892+0000 mgr.y (mgr.24416) 148 : cluster [DBG] pgmap v180: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: cluster 2026-03-09T21:19:48.757034+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:48 vm07 bash[20771]: cluster 2026-03-09T21:19:48.757034+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:19:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:19:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: audit 2026-03-09T21:19:47.750227+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: audit 2026-03-09T21:19:47.750227+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: cluster 2026-03-09T21:19:47.757215+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: cluster 2026-03-09T21:19:47.757215+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: cluster 2026-03-09T21:19:47.760892+0000 mgr.y (mgr.24416) 148 : cluster [DBG] pgmap v180: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: cluster 2026-03-09T21:19:47.760892+0000 mgr.y (mgr.24416) 148 : cluster [DBG] pgmap v180: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: cluster 2026-03-09T21:19:48.757034+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T21:19:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:48 vm07 bash[28052]: cluster 2026-03-09T21:19:48.757034+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T21:19:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: audit 2026-03-09T21:19:47.750227+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: audit 2026-03-09T21:19:47.750227+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 192.168.123.107:0/4261167445' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: cluster 2026-03-09T21:19:47.757215+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: cluster 2026-03-09T21:19:47.757215+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: cluster 2026-03-09T21:19:47.760892+0000 mgr.y (mgr.24416) 148 : cluster [DBG] pgmap v180: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: cluster 2026-03-09T21:19:47.760892+0000 mgr.y (mgr.24416) 148 : cluster [DBG] pgmap v180: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: cluster 2026-03-09T21:19:48.757034+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T21:19:49.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:48 vm10 bash[23387]: cluster 2026-03-09T21:19:48.757034+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: cluster 2026-03-09T21:19:49.779176+0000 mon.a (mon.0) 949 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: cluster 2026-03-09T21:19:49.779176+0000 mon.a (mon.0) 949 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: cluster 2026-03-09T21:19:49.790114+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: cluster 2026-03-09T21:19:49.790114+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: audit 2026-03-09T21:19:49.799422+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.107:0/2013131591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: audit 2026-03-09T21:19:49.799422+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.107:0/2013131591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: audit 2026-03-09T21:19:49.799734+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:49 vm07 bash[20771]: audit 2026-03-09T21:19:49.799734+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: cluster 2026-03-09T21:19:49.779176+0000 mon.a (mon.0) 949 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: cluster 2026-03-09T21:19:49.779176+0000 mon.a (mon.0) 949 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: cluster 2026-03-09T21:19:49.790114+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: cluster 2026-03-09T21:19:49.790114+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: audit 2026-03-09T21:19:49.799422+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.107:0/2013131591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: audit 2026-03-09T21:19:49.799422+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.107:0/2013131591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: audit 2026-03-09T21:19:49.799734+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:49 vm07 bash[28052]: audit 2026-03-09T21:19:49.799734+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: cluster 2026-03-09T21:19:49.779176+0000 mon.a (mon.0) 949 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: cluster 2026-03-09T21:19:49.779176+0000 mon.a (mon.0) 949 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: cluster 2026-03-09T21:19:49.790114+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: cluster 2026-03-09T21:19:49.790114+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: audit 2026-03-09T21:19:49.799422+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.107:0/2013131591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: audit 2026-03-09T21:19:49.799422+0000 mon.c (mon.2) 80 : audit [INF] from='client.? 192.168.123.107:0/2013131591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: audit 2026-03-09T21:19:49.799734+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:49 vm10 bash[23387]: audit 2026-03-09T21:19:49.799734+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:50 vm07 bash[20771]: cluster 2026-03-09T21:19:49.761176+0000 mgr.y (mgr.24416) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:50 vm07 bash[20771]: cluster 2026-03-09T21:19:49.761176+0000 mgr.y (mgr.24416) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:50 vm07 bash[20771]: audit 2026-03-09T21:19:50.789547+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:50 vm07 bash[20771]: audit 2026-03-09T21:19:50.789547+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:50 vm07 bash[20771]: cluster 2026-03-09T21:19:50.795127+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:50 vm07 bash[20771]: cluster 2026-03-09T21:19:50.795127+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:50 vm07 bash[28052]: cluster 2026-03-09T21:19:49.761176+0000 mgr.y (mgr.24416) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:50 vm07 bash[28052]: cluster 2026-03-09T21:19:49.761176+0000 mgr.y (mgr.24416) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:50 vm07 bash[28052]: audit 2026-03-09T21:19:50.789547+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:51.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:50 vm07 bash[28052]: audit 2026-03-09T21:19:50.789547+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:51.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:50 vm07 bash[28052]: cluster 2026-03-09T21:19:50.795127+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-09T21:19:51.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:50 vm07 bash[28052]: cluster 2026-03-09T21:19:50.795127+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-09T21:19:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:50 vm10 bash[23387]: cluster 2026-03-09T21:19:49.761176+0000 mgr.y (mgr.24416) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:50 vm10 bash[23387]: cluster 2026-03-09T21:19:49.761176+0000 mgr.y (mgr.24416) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:50 vm10 bash[23387]: audit 2026-03-09T21:19:50.789547+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:50 vm10 bash[23387]: audit 2026-03-09T21:19:50.789547+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:50 vm10 bash[23387]: cluster 2026-03-09T21:19:50.795127+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-09T21:19:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:50 vm10 bash[23387]: cluster 2026-03-09T21:19:50.795127+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-09T21:19:51.809 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_id PASSED [ 39%]
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:52 vm07 bash[20771]: cluster 2026-03-09T21:19:51.761448+0000 mgr.y (mgr.24416) 150 : cluster [DBG] pgmap v185: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:52 vm07 bash[20771]: cluster 2026-03-09T21:19:51.761448+0000 mgr.y (mgr.24416) 150 : cluster [DBG] pgmap v185: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:52 vm07 bash[20771]: cluster 2026-03-09T21:19:51.804454+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:52 vm07 bash[20771]: cluster 2026-03-09T21:19:51.804454+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:52 vm07 bash[28052]: cluster 2026-03-09T21:19:51.761448+0000 mgr.y (mgr.24416) 150 : cluster [DBG] pgmap v185: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:52 vm07 bash[28052]: cluster 2026-03-09T21:19:51.761448+0000 mgr.y (mgr.24416) 150 : cluster [DBG] pgmap v185: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:52 vm07 bash[28052]: cluster 2026-03-09T21:19:51.804454+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T21:19:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:52 vm07 bash[28052]: cluster 2026-03-09T21:19:51.804454+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T21:19:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:52 vm10 bash[23387]: cluster 2026-03-09T21:19:51.761448+0000 mgr.y (mgr.24416) 150 : cluster [DBG] pgmap v185: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:52 vm10 bash[23387]: cluster 2026-03-09T21:19:51.761448+0000 mgr.y (mgr.24416) 150 : cluster [DBG] pgmap v185: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:19:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:52 vm10 bash[23387]: cluster 2026-03-09T21:19:51.804454+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T21:19:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:52 vm10 bash[23387]: cluster 2026-03-09T21:19:51.804454+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:53 vm07 bash[20771]: cluster 2026-03-09T21:19:52.823542+0000 mon.a (mon.0) 955 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:53 vm07 bash[20771]: cluster 2026-03-09T21:19:52.823542+0000 mon.a (mon.0) 955 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:53 vm07 bash[20771]: audit 2026-03-09T21:19:52.831244+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:53 vm07 bash[20771]: audit 2026-03-09T21:19:52.831244+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:53 vm07 bash[28052]: cluster 2026-03-09T21:19:52.823542+0000 mon.a (mon.0) 955 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:53 vm07 bash[28052]: cluster 2026-03-09T21:19:52.823542+0000 mon.a (mon.0) 955 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:53 vm07 bash[28052]: audit 2026-03-09T21:19:52.831244+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:54.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:53 vm07 bash[28052]: audit 2026-03-09T21:19:52.831244+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:53 vm10 bash[23387]: cluster 2026-03-09T21:19:52.823542+0000 mon.a (mon.0) 955 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-09T21:19:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:53 vm10 bash[23387]: cluster 2026-03-09T21:19:52.823542+0000 mon.a (mon.0) 955 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in
2026-03-09T21:19:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:53 vm10 bash[23387]: audit 2026-03-09T21:19:52.831244+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:54.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:53 vm10 bash[23387]: audit 2026-03-09T21:19:52.831244+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:19:54.837 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_name PASSED [ 40%]
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:54 vm07 bash[20771]: cluster 2026-03-09T21:19:53.761770+0000 mgr.y (mgr.24416) 151 : cluster [DBG] pgmap v188: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:54 vm07 bash[20771]: cluster 2026-03-09T21:19:53.761770+0000 mgr.y (mgr.24416) 151 : cluster [DBG] pgmap v188: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:54 vm07 bash[20771]: audit 2026-03-09T21:19:53.828510+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:54 vm07 bash[20771]: audit 2026-03-09T21:19:53.828510+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:54 vm07 bash[20771]: cluster 2026-03-09T21:19:53.835937+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:54 vm07 bash[20771]: cluster 2026-03-09T21:19:53.835937+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:54 vm07 bash[28052]: cluster 2026-03-09T21:19:53.761770+0000 mgr.y (mgr.24416) 151 : cluster [DBG] pgmap v188: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:54 vm07 bash[28052]: cluster 2026-03-09T21:19:53.761770+0000 mgr.y (mgr.24416) 151 : cluster [DBG] pgmap v188: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:54 vm07 bash[28052]: audit 2026-03-09T21:19:53.828510+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:54 vm07 bash[28052]: audit 2026-03-09T21:19:53.828510+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:54 vm07 bash[28052]: cluster 2026-03-09T21:19:53.835937+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-09T21:19:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:54 vm07 bash[28052]: cluster 2026-03-09T21:19:53.835937+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-09T21:19:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:54 vm10 bash[23387]: cluster 2026-03-09T21:19:53.761770+0000 mgr.y (mgr.24416) 151 : cluster [DBG] pgmap v188: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:54 vm10 bash[23387]: cluster 2026-03-09T21:19:53.761770+0000 mgr.y (mgr.24416) 151 : cluster [DBG] pgmap v188: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:54 vm10 bash[23387]: audit 2026-03-09T21:19:53.828510+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:54 vm10 bash[23387]: audit 2026-03-09T21:19:53.828510+0000 mon.a (mon.0) 957 : audit [INF] from='client.? 192.168.123.107:0/1767333636' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:19:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:54 vm10 bash[23387]: cluster 2026-03-09T21:19:53.835937+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-09T21:19:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:54 vm10 bash[23387]: cluster 2026-03-09T21:19:53.835937+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-09T21:19:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:55 vm07 bash[20771]: cluster 2026-03-09T21:19:54.835166+0000 mon.a (mon.0) 959 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-09T21:19:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:55 vm07 bash[20771]: cluster 2026-03-09T21:19:54.835166+0000 mon.a (mon.0) 959 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-09T21:19:56.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:55 vm07 bash[28052]: cluster 2026-03-09T21:19:54.835166+0000 mon.a (mon.0) 959 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-09T21:19:56.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:55 vm07 bash[28052]: cluster 2026-03-09T21:19:54.835166+0000 mon.a (mon.0) 959 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-09T21:19:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:55 vm10 bash[23387]: cluster 2026-03-09T21:19:54.835166+0000 mon.a (mon.0) 959 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-09T21:19:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:55 vm10 bash[23387]: cluster 2026-03-09T21:19:54.835166+0000 mon.a (mon.0) 959 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-09T21:19:56.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:19:56 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:19:56.863 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:56 vm10 bash[23387]: cluster 2026-03-09T21:19:55.762031+0000 mgr.y (mgr.24416) 152 : cluster [DBG] pgmap v191: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:56.863 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:56 vm10 bash[23387]: cluster 2026-03-09T21:19:55.762031+0000 mgr.y (mgr.24416) 152 : cluster [DBG] pgmap v191: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:56 vm07 bash[20771]: cluster 2026-03-09T21:19:55.762031+0000 mgr.y (mgr.24416) 152 : cluster [DBG] pgmap v191: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:56 vm07 bash[20771]: cluster 2026-03-09T21:19:55.762031+0000 mgr.y (mgr.24416) 152 : cluster [DBG] pgmap v191: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:56 vm07 bash[20771]: cluster 2026-03-09T21:19:55.843662+0000 mon.a (mon.0) 960 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:56 vm07 bash[20771]: cluster 2026-03-09T21:19:55.843662+0000 mon.a (mon.0) 960 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:56 vm07 bash[20771]: cluster 2026-03-09T21:19:55.880653+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:56 vm07 bash[20771]: cluster 2026-03-09T21:19:55.880653+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:56 vm07 bash[28052]: cluster 2026-03-09T21:19:55.762031+0000 mgr.y (mgr.24416) 152 : cluster [DBG] pgmap v191: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:56 vm07 bash[28052]: cluster 2026-03-09T21:19:55.762031+0000 mgr.y (mgr.24416) 152 : cluster [DBG] pgmap v191: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:56 vm07 bash[28052]: cluster 2026-03-09T21:19:55.843662+0000 mon.a (mon.0) 960 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:56 vm07 bash[28052]: cluster 2026-03-09T21:19:55.843662+0000 mon.a (mon.0) 960 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:56 vm07 bash[28052]: cluster 2026-03-09T21:19:55.880653+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-09T21:19:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:56 vm07 bash[28052]: cluster 2026-03-09T21:19:55.880653+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-09T21:19:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:56 vm10 bash[23387]: cluster 2026-03-09T21:19:55.843662+0000 mon.a (mon.0) 960 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:56 vm10 bash[23387]: cluster 2026-03-09T21:19:55.843662+0000 mon.a (mon.0) 960 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:19:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:56 vm10 bash[23387]: cluster 2026-03-09T21:19:55.880653+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-09T21:19:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:56 vm10 bash[23387]: cluster 2026-03-09T21:19:55.880653+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-09T21:19:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:57 vm10 bash[23387]: audit 2026-03-09T21:19:56.329273+0000 mgr.y (mgr.24416) 153 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:57 vm10 bash[23387]: audit 2026-03-09T21:19:56.329273+0000 mgr.y (mgr.24416) 153 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:57 vm10 bash[23387]: cluster 2026-03-09T21:19:56.890877+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-09T21:19:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:57 vm10 bash[23387]: cluster 2026-03-09T21:19:56.890877+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-09T21:19:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:57 vm10 bash[23387]: audit 2026-03-09T21:19:56.974153+0000 mon.c (mon.2) 81 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:19:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:57 vm10 bash[23387]: audit 2026-03-09T21:19:56.974153+0000 mon.c (mon.2) 81 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:57 vm07 bash[20771]: audit 2026-03-09T21:19:56.329273+0000 mgr.y (mgr.24416) 153 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:57 vm07 bash[20771]: audit 2026-03-09T21:19:56.329273+0000 mgr.y (mgr.24416) 153 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:57 vm07 bash[20771]: cluster 2026-03-09T21:19:56.890877+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:57 vm07 bash[20771]: cluster 2026-03-09T21:19:56.890877+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:57 vm07 bash[20771]: audit 2026-03-09T21:19:56.974153+0000 mon.c (mon.2) 81 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:57 vm07 bash[20771]: audit 2026-03-09T21:19:56.974153+0000 mon.c (mon.2) 81 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:57 vm07 bash[28052]: audit 2026-03-09T21:19:56.329273+0000 mgr.y (mgr.24416) 153 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:57 vm07 bash[28052]: audit 2026-03-09T21:19:56.329273+0000 mgr.y (mgr.24416) 153 : audit [DBG]
from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:57 vm07 bash[28052]: cluster 2026-03-09T21:19:56.890877+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:57 vm07 bash[28052]: cluster 2026-03-09T21:19:56.890877+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:57 vm07 bash[28052]: audit 2026-03-09T21:19:56.974153+0000 mon.c (mon.2) 81 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:57 vm07 bash[28052]: audit 2026-03-09T21:19:56.974153+0000 mon.c (mon.2) 81 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:19:58.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: cluster 2026-03-09T21:19:57.762463+0000 mgr.y (mgr.24416) 154 : cluster [DBG] pgmap v194: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:58.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: cluster 2026-03-09T21:19:57.762463+0000 mgr.y (mgr.24416) 154 : cluster [DBG] pgmap v194: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:58.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: cluster 2026-03-09T21:19:57.907010+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T21:19:58.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: cluster 
2026-03-09T21:19:57.907010+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T21:19:58.901 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:19:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:19:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: cluster 2026-03-09T21:19:57.762463+0000 mgr.y (mgr.24416) 154 : cluster [DBG] pgmap v194: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: cluster 2026-03-09T21:19:57.762463+0000 mgr.y (mgr.24416) 154 : cluster [DBG] pgmap v194: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: cluster 2026-03-09T21:19:57.907010+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: cluster 2026-03-09T21:19:57.907010+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: audit 2026-03-09T21:19:57.908615+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.107:0/807692438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: audit 2026-03-09T21:19:57.908615+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 
192.168.123.107:0/807692438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: audit 2026-03-09T21:19:57.909164+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:58 vm10 bash[23387]: audit 2026-03-09T21:19:57.909164+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: audit 2026-03-09T21:19:57.908615+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.107:0/807692438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: audit 2026-03-09T21:19:57.908615+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.107:0/807692438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: audit 2026-03-09T21:19:57.909164+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:58 vm07 bash[20771]: audit 2026-03-09T21:19:57.909164+0000 mon.a (mon.0) 964 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: cluster 2026-03-09T21:19:57.762463+0000 mgr.y (mgr.24416) 154 : cluster [DBG] pgmap v194: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: cluster 2026-03-09T21:19:57.762463+0000 mgr.y (mgr.24416) 154 : cluster [DBG] pgmap v194: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: cluster 2026-03-09T21:19:57.907010+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: cluster 2026-03-09T21:19:57.907010+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: audit 2026-03-09T21:19:57.908615+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.107:0/807692438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: audit 2026-03-09T21:19:57.908615+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.107:0/807692438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: audit 2026-03-09T21:19:57.909164+0000 mon.a (mon.0) 964 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:58 vm07 bash[28052]: audit 2026-03-09T21:19:57.909164+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:19:59.929 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_create_snap PASSED [ 41%] 2026-03-09T21:20:00.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:59 vm10 bash[23387]: audit 2026-03-09T21:19:58.889918+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:00.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:59 vm10 bash[23387]: audit 2026-03-09T21:19:58.889918+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:00.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:59 vm10 bash[23387]: cluster 2026-03-09T21:19:58.894210+0000 mon.a (mon.0) 966 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T21:20:00.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:19:59 vm10 bash[23387]: cluster 2026-03-09T21:19:58.894210+0000 mon.a (mon.0) 966 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:59 vm07 bash[20771]: audit 2026-03-09T21:19:58.889918+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:59 vm07 bash[20771]: audit 2026-03-09T21:19:58.889918+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:59 vm07 bash[20771]: cluster 2026-03-09T21:19:58.894210+0000 mon.a (mon.0) 966 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:19:59 vm07 bash[20771]: cluster 2026-03-09T21:19:58.894210+0000 mon.a (mon.0) 966 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:59 vm07 bash[28052]: audit 2026-03-09T21:19:58.889918+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:59 vm07 bash[28052]: audit 2026-03-09T21:19:58.889918+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:59 vm07 bash[28052]: cluster 2026-03-09T21:19:58.894210+0000 mon.a (mon.0) 966 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T21:20:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:19:59 vm07 bash[28052]: cluster 2026-03-09T21:19:58.894210+0000 mon.a (mon.0) 966 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:19:59.762754+0000 mgr.y (mgr.24416) 155 : cluster [DBG] pgmap v197: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:19:59.762754+0000 mgr.y (mgr.24416) 155 : cluster [DBG] pgmap v197: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:19:59.917288+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:19:59.917288+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000129+0000 mon.a (mon.0) 968 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000129+0000 mon.a (mon.0) 968 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000154+0000 mon.a (mon.0) 969 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000154+0000 mon.a (mon.0) 969 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000172+0000 mon.a (mon.0) 970 : cluster [WRN] application not enabled on pool 'rbd' 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000172+0000 mon.a (mon.0) 970 : cluster [WRN] application not enabled on pool 'rbd' 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000180+0000 mon.a (mon.0) 971 : cluster [WRN] application not enabled on 
pool 'test_pool' 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000180+0000 mon.a (mon.0) 971 : cluster [WRN] application not enabled on pool 'test_pool' 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000187+0000 mon.a (mon.0) 972 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:00 vm07 bash[20771]: cluster 2026-03-09T21:20:00.000187+0000 mon.a (mon.0) 972 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:19:59.762754+0000 mgr.y (mgr.24416) 155 : cluster [DBG] pgmap v197: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:19:59.762754+0000 mgr.y (mgr.24416) 155 : cluster [DBG] pgmap v197: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:19:59.917288+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:19:59.917288+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000129+0000 mon.a (mon.0) 968 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have 
an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000129+0000 mon.a (mon.0) 968 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000154+0000 mon.a (mon.0) 969 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000154+0000 mon.a (mon.0) 969 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000172+0000 mon.a (mon.0) 970 : cluster [WRN] application not enabled on pool 'rbd' 2026-03-09T21:20:01.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000172+0000 mon.a (mon.0) 970 : cluster [WRN] application not enabled on pool 'rbd' 2026-03-09T21:20:01.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000180+0000 mon.a (mon.0) 971 : cluster [WRN] application not enabled on pool 'test_pool' 2026-03-09T21:20:01.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000180+0000 mon.a (mon.0) 971 : cluster [WRN] application not enabled on pool 'test_pool' 2026-03-09T21:20:01.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000187+0000 mon.a (mon.0) 972 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T21:20:01.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:00 vm07 bash[28052]: cluster 2026-03-09T21:20:00.000187+0000 mon.a (mon.0) 972 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:19:59.762754+0000 mgr.y (mgr.24416) 155 : cluster [DBG] pgmap v197: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:19:59.762754+0000 mgr.y (mgr.24416) 155 : cluster [DBG] pgmap v197: 196 pgs: 196 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:19:59.917288+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:19:59.917288+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000129+0000 mon.a (mon.0) 968 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000129+0000 mon.a (mon.0) 968 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000154+0000 mon.a (mon.0) 969 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 
2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000154+0000 mon.a (mon.0) 969 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000172+0000 mon.a (mon.0) 970 : cluster [WRN] application not enabled on pool 'rbd' 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000172+0000 mon.a (mon.0) 970 : cluster [WRN] application not enabled on pool 'rbd' 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000180+0000 mon.a (mon.0) 971 : cluster [WRN] application not enabled on pool 'test_pool' 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000180+0000 mon.a (mon.0) 971 : cluster [WRN] application not enabled on pool 'test_pool' 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000187+0000 mon.a (mon.0) 972 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T21:20:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:00 vm10 bash[23387]: cluster 2026-03-09T21:20:00.000187+0000 mon.a (mon.0) 972 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: cluster 2026-03-09T21:20:00.992571+0000 mon.a (mon.0) 973 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: cluster 2026-03-09T21:20:00.992571+0000 mon.a (mon.0) 973 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: audit 2026-03-09T21:20:00.997260+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.107:0/3884523860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: audit 2026-03-09T21:20:00.997260+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.107:0/3884523860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: audit 2026-03-09T21:20:00.999251+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: audit 2026-03-09T21:20:00.999251+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: cluster 2026-03-09T21:20:01.763019+0000 mgr.y (mgr.24416) 156 : cluster [DBG] pgmap v200: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:02 vm07 bash[20771]: cluster 2026-03-09T21:20:01.763019+0000 mgr.y (mgr.24416) 156 : cluster [DBG] pgmap v200: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: cluster 2026-03-09T21:20:00.992571+0000 mon.a (mon.0) 973 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: cluster 2026-03-09T21:20:00.992571+0000 mon.a (mon.0) 973 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: audit 2026-03-09T21:20:00.997260+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.107:0/3884523860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: audit 2026-03-09T21:20:00.997260+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.107:0/3884523860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: audit 2026-03-09T21:20:00.999251+0000 mon.a (mon.0) 974 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: audit 2026-03-09T21:20:00.999251+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: cluster 2026-03-09T21:20:01.763019+0000 mgr.y (mgr.24416) 156 : cluster [DBG] pgmap v200: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:02.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:02 vm07 bash[28052]: cluster 2026-03-09T21:20:01.763019+0000 mgr.y (mgr.24416) 156 : cluster [DBG] pgmap v200: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: cluster 2026-03-09T21:20:00.992571+0000 mon.a (mon.0) 973 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: cluster 2026-03-09T21:20:00.992571+0000 mon.a (mon.0) 973 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: audit 2026-03-09T21:20:00.997260+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.107:0/3884523860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: audit 2026-03-09T21:20:00.997260+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 
192.168.123.107:0/3884523860' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: audit 2026-03-09T21:20:00.999251+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: audit 2026-03-09T21:20:00.999251+0000 mon.a (mon.0) 974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: cluster 2026-03-09T21:20:01.763019+0000 mgr.y (mgr.24416) 156 : cluster [DBG] pgmap v200: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:02.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:02 vm10 bash[23387]: cluster 2026-03-09T21:20:01.763019+0000 mgr.y (mgr.24416) 156 : cluster [DBG] pgmap v200: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:03.013 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps_empty PASSED [ 42%] 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:03 vm07 bash[20771]: audit 2026-03-09T21:20:01.999295+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:03 vm07 bash[20771]: audit 2026-03-09T21:20:01.999295+0000 mon.a (mon.0) 975 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:03 vm07 bash[20771]: cluster 2026-03-09T21:20:02.030963+0000 mon.a (mon.0) 976 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:03 vm07 bash[20771]: cluster 2026-03-09T21:20:02.030963+0000 mon.a (mon.0) 976 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:03 vm07 bash[28052]: audit 2026-03-09T21:20:01.999295+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:03 vm07 bash[28052]: audit 2026-03-09T21:20:01.999295+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:03 vm07 bash[28052]: cluster 2026-03-09T21:20:02.030963+0000 mon.a (mon.0) 976 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T21:20:03.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:03 vm07 bash[28052]: cluster 2026-03-09T21:20:02.030963+0000 mon.a (mon.0) 976 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T21:20:03.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:03 vm10 bash[23387]: audit 2026-03-09T21:20:01.999295+0000 mon.a (mon.0) 975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:03.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:03 vm10 bash[23387]: audit 2026-03-09T21:20:01.999295+0000 mon.a (mon.0) 975 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:03.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:03 vm10 bash[23387]: cluster 2026-03-09T21:20:02.030963+0000 mon.a (mon.0) 976 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T21:20:03.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:03 vm10 bash[23387]: cluster 2026-03-09T21:20:02.030963+0000 mon.a (mon.0) 976 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:04 vm07 bash[20771]: cluster 2026-03-09T21:20:03.007357+0000 mon.a (mon.0) 977 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:04 vm07 bash[20771]: cluster 2026-03-09T21:20:03.007357+0000 mon.a (mon.0) 977 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:04 vm07 bash[20771]: cluster 2026-03-09T21:20:03.763285+0000 mgr.y (mgr.24416) 157 : cluster [DBG] pgmap v203: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:04 vm07 bash[20771]: cluster 2026-03-09T21:20:03.763285+0000 mgr.y (mgr.24416) 157 : cluster [DBG] pgmap v203: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:04 vm07 bash[28052]: cluster 2026-03-09T21:20:03.007357+0000 mon.a (mon.0) 977 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:04 vm07 bash[28052]: cluster 2026-03-09T21:20:03.007357+0000 mon.a (mon.0) 977 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:04 vm07 bash[28052]: cluster 
2026-03-09T21:20:03.763285+0000 mgr.y (mgr.24416) 157 : cluster [DBG] pgmap v203: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:04 vm07 bash[28052]: cluster 2026-03-09T21:20:03.763285+0000 mgr.y (mgr.24416) 157 : cluster [DBG] pgmap v203: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:04 vm10 bash[23387]: cluster 2026-03-09T21:20:03.007357+0000 mon.a (mon.0) 977 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T21:20:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:04 vm10 bash[23387]: cluster 2026-03-09T21:20:03.007357+0000 mon.a (mon.0) 977 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-09T21:20:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:04 vm10 bash[23387]: cluster 2026-03-09T21:20:03.763285+0000 mgr.y (mgr.24416) 157 : cluster [DBG] pgmap v203: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:04.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:04 vm10 bash[23387]: cluster 2026-03-09T21:20:03.763285+0000 mgr.y (mgr.24416) 157 : cluster [DBG] pgmap v203: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:05 vm07 bash[20771]: cluster 2026-03-09T21:20:04.016934+0000 mon.a (mon.0) 978 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:05 vm07 bash[20771]: cluster 2026-03-09T21:20:04.016934+0000 mon.a (mon.0) 978 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:05.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:05 vm07 bash[20771]: cluster 2026-03-09T21:20:04.043768+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:05 vm07 bash[20771]: cluster 2026-03-09T21:20:04.043768+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:05 vm07 bash[28052]: cluster 2026-03-09T21:20:04.016934+0000 mon.a (mon.0) 978 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:05 vm07 bash[28052]: cluster 2026-03-09T21:20:04.016934+0000 mon.a (mon.0) 978 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:05 vm07 bash[28052]: cluster 2026-03-09T21:20:04.043768+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T21:20:05.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:05 vm07 bash[28052]: cluster 2026-03-09T21:20:04.043768+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T21:20:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:05 vm10 bash[23387]: cluster 2026-03-09T21:20:04.016934+0000 mon.a (mon.0) 978 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:05 vm10 bash[23387]: cluster 2026-03-09T21:20:04.016934+0000 mon.a (mon.0) 978 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:05 vm10 bash[23387]: cluster 2026-03-09T21:20:04.043768+0000 mon.a (mon.0) 979 : cluster 
[DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T21:20:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:05 vm10 bash[23387]: cluster 2026-03-09T21:20:04.043768+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:06 vm07 bash[20771]: cluster 2026-03-09T21:20:05.032160+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:06 vm07 bash[20771]: cluster 2026-03-09T21:20:05.032160+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:06 vm07 bash[20771]: cluster 2026-03-09T21:20:05.763605+0000 mgr.y (mgr.24416) 158 : cluster [DBG] pgmap v206: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:06 vm07 bash[20771]: cluster 2026-03-09T21:20:05.763605+0000 mgr.y (mgr.24416) 158 : cluster [DBG] pgmap v206: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:06 vm07 bash[28052]: cluster 2026-03-09T21:20:05.032160+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:06 vm07 bash[28052]: cluster 2026-03-09T21:20:05.032160+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T21:20:06.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:06 vm07 bash[28052]: cluster 2026-03-09T21:20:05.763605+0000 mgr.y (mgr.24416) 158 : cluster [DBG] pgmap v206: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:06.365 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:06 vm07 bash[28052]: cluster 2026-03-09T21:20:05.763605+0000 mgr.y (mgr.24416) 158 : cluster [DBG] pgmap v206: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:06.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:06 vm10 bash[23387]: cluster 2026-03-09T21:20:05.032160+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T21:20:06.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:06 vm10 bash[23387]: cluster 2026-03-09T21:20:05.032160+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-09T21:20:06.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:06 vm10 bash[23387]: cluster 2026-03-09T21:20:05.763605+0000 mgr.y (mgr.24416) 158 : cluster [DBG] pgmap v206: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:06.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:06 vm10 bash[23387]: cluster 2026-03-09T21:20:05.763605+0000 mgr.y (mgr.24416) 158 : cluster [DBG] pgmap v206: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:06.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:20:06 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:07 vm07 bash[20771]: cluster 2026-03-09T21:20:06.079968+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:07 vm07 bash[20771]: cluster 2026-03-09T21:20:06.079968+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:07 vm07 bash[20771]: audit 2026-03-09T21:20:06.338363+0000 mgr.y (mgr.24416) 159 
: audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:07 vm07 bash[20771]: audit 2026-03-09T21:20:06.338363+0000 mgr.y (mgr.24416) 159 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:07 vm07 bash[28052]: cluster 2026-03-09T21:20:06.079968+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:07 vm07 bash[28052]: cluster 2026-03-09T21:20:06.079968+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:07 vm07 bash[28052]: audit 2026-03-09T21:20:06.338363+0000 mgr.y (mgr.24416) 159 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:07.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:07 vm07 bash[28052]: audit 2026-03-09T21:20:06.338363+0000 mgr.y (mgr.24416) 159 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:07 vm10 bash[23387]: cluster 2026-03-09T21:20:06.079968+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T21:20:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:07 vm10 bash[23387]: cluster 2026-03-09T21:20:06.079968+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-09T21:20:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:07 vm10 bash[23387]: audit 2026-03-09T21:20:06.338363+0000 mgr.y (mgr.24416) 159 : audit [DBG] from='client.24400 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:07 vm10 bash[23387]: audit 2026-03-09T21:20:06.338363+0000 mgr.y (mgr.24416) 159 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: cluster 2026-03-09T21:20:07.184302+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: cluster 2026-03-09T21:20:07.184302+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: audit 2026-03-09T21:20:07.198649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.107:0/2585911100' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: audit 2026-03-09T21:20:07.198649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 
192.168.123.107:0/2585911100' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: cluster 2026-03-09T21:20:07.764184+0000 mgr.y (mgr.24416) 160 : cluster [DBG] pgmap v209: 196 pgs: 3 unknown, 193 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: cluster 2026-03-09T21:20:07.764184+0000 mgr.y (mgr.24416) 160 : cluster [DBG] pgmap v209: 196 pgs: 3 unknown, 193 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: audit 2026-03-09T21:20:08.144777+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 192.168.123.107:0/2585911100' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: audit 2026-03-09T21:20:08.144777+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 
192.168.123.107:0/2585911100' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: cluster 2026-03-09T21:20:08.153942+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:08 vm07 bash[20771]: cluster 2026-03-09T21:20:08.153942+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: cluster 2026-03-09T21:20:07.184302+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: cluster 2026-03-09T21:20:07.184302+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: audit 2026-03-09T21:20:07.198649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.107:0/2585911100' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: audit 2026-03-09T21:20:07.198649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 
192.168.123.107:0/2585911100' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: cluster 2026-03-09T21:20:07.764184+0000 mgr.y (mgr.24416) 160 : cluster [DBG] pgmap v209: 196 pgs: 3 unknown, 193 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: cluster 2026-03-09T21:20:07.764184+0000 mgr.y (mgr.24416) 160 : cluster [DBG] pgmap v209: 196 pgs: 3 unknown, 193 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: audit 2026-03-09T21:20:08.144777+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 192.168.123.107:0/2585911100' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: audit 2026-03-09T21:20:08.144777+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 
192.168.123.107:0/2585911100' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: cluster 2026-03-09T21:20:08.153942+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T21:20:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:08 vm07 bash[28052]: cluster 2026-03-09T21:20:08.153942+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: cluster 2026-03-09T21:20:07.184302+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: cluster 2026-03-09T21:20:07.184302+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: audit 2026-03-09T21:20:07.198649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.107:0/2585911100' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: audit 2026-03-09T21:20:07.198649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 
192.168.123.107:0/2585911100' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: cluster 2026-03-09T21:20:07.764184+0000 mgr.y (mgr.24416) 160 : cluster [DBG] pgmap v209: 196 pgs: 3 unknown, 193 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: cluster 2026-03-09T21:20:07.764184+0000 mgr.y (mgr.24416) 160 : cluster [DBG] pgmap v209: 196 pgs: 3 unknown, 193 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: audit 2026-03-09T21:20:08.144777+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 192.168.123.107:0/2585911100' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: audit 2026-03-09T21:20:08.144777+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 
192.168.123.107:0/2585911100' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: cluster 2026-03-09T21:20:08.153942+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T21:20:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:08 vm10 bash[23387]: cluster 2026-03-09T21:20:08.153942+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-09T21:20:09.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:20:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:20:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:20:09.169 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps PASSED [ 43%] 2026-03-09T21:20:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:10 vm10 bash[23387]: cluster 2026-03-09T21:20:09.163459+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T21:20:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:10 vm10 bash[23387]: cluster 2026-03-09T21:20:09.163459+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T21:20:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:10 vm10 bash[23387]: cluster 2026-03-09T21:20:09.756628+0000 mon.a (mon.0) 987 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:10 vm10 bash[23387]: cluster 2026-03-09T21:20:09.756628+0000 mon.a (mon.0) 987 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:10 vm10 bash[23387]: cluster 2026-03-09T21:20:09.764603+0000 mgr.y (mgr.24416) 161 : cluster [DBG] pgmap v212: 164 
pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:10 vm10 bash[23387]: cluster 2026-03-09T21:20:09.764603+0000 mgr.y (mgr.24416) 161 : cluster [DBG] pgmap v212: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:10 vm07 bash[20771]: cluster 2026-03-09T21:20:09.163459+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:10 vm07 bash[20771]: cluster 2026-03-09T21:20:09.163459+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:10 vm07 bash[20771]: cluster 2026-03-09T21:20:09.756628+0000 mon.a (mon.0) 987 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:10 vm07 bash[20771]: cluster 2026-03-09T21:20:09.756628+0000 mon.a (mon.0) 987 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:10 vm07 bash[20771]: cluster 2026-03-09T21:20:09.764603+0000 mgr.y (mgr.24416) 161 : cluster [DBG] pgmap v212: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:10 vm07 bash[20771]: cluster 2026-03-09T21:20:09.764603+0000 mgr.y (mgr.24416) 161 : cluster [DBG] pgmap v212: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:10 vm07 bash[28052]: cluster 
2026-03-09T21:20:09.163459+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:10 vm07 bash[28052]: cluster 2026-03-09T21:20:09.163459+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:10 vm07 bash[28052]: cluster 2026-03-09T21:20:09.756628+0000 mon.a (mon.0) 987 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:10 vm07 bash[28052]: cluster 2026-03-09T21:20:09.756628+0000 mon.a (mon.0) 987 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:10 vm07 bash[28052]: cluster 2026-03-09T21:20:09.764603+0000 mgr.y (mgr.24416) 161 : cluster [DBG] pgmap v212: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:10 vm07 bash[28052]: cluster 2026-03-09T21:20:09.764603+0000 mgr.y (mgr.24416) 161 : cluster [DBG] pgmap v212: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:11 vm10 bash[23387]: cluster 2026-03-09T21:20:10.175614+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T21:20:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:11 vm10 bash[23387]: cluster 2026-03-09T21:20:10.175614+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T21:20:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:11 vm07 bash[20771]: cluster 2026-03-09T21:20:10.175614+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e172: 8 total, 
8 up, 8 in 2026-03-09T21:20:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:11 vm07 bash[20771]: cluster 2026-03-09T21:20:10.175614+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T21:20:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:11 vm07 bash[28052]: cluster 2026-03-09T21:20:10.175614+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T21:20:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:11 vm07 bash[28052]: cluster 2026-03-09T21:20:10.175614+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: cluster 2026-03-09T21:20:11.172552+0000 mon.a (mon.0) 989 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: cluster 2026-03-09T21:20:11.172552+0000 mon.a (mon.0) 989 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: audit 2026-03-09T21:20:11.175680+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.107:0/1528039992' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: audit 2026-03-09T21:20:11.175680+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.107:0/1528039992' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: audit 2026-03-09T21:20:11.179645+0000 mon.a (mon.0) 990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: audit 2026-03-09T21:20:11.179645+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: cluster 2026-03-09T21:20:11.765025+0000 mgr.y (mgr.24416) 162 : cluster [DBG] pgmap v215: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: cluster 2026-03-09T21:20:11.765025+0000 mgr.y (mgr.24416) 162 : cluster [DBG] pgmap v215: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: audit 2026-03-09T21:20:11.979899+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:12 vm07 bash[20771]: audit 2026-03-09T21:20:11.979899+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: cluster 2026-03-09T21:20:11.172552+0000 mon.a (mon.0) 989 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: cluster 2026-03-09T21:20:11.172552+0000 mon.a (mon.0) 989 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 21:20:12 vm07 bash[28052]: audit 2026-03-09T21:20:11.175680+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.107:0/1528039992' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: audit 2026-03-09T21:20:11.175680+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.107:0/1528039992' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: audit 2026-03-09T21:20:11.179645+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: audit 2026-03-09T21:20:11.179645+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: cluster 2026-03-09T21:20:11.765025+0000 mgr.y (mgr.24416) 162 : cluster [DBG] pgmap v215: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: cluster 2026-03-09T21:20:11.765025+0000 mgr.y (mgr.24416) 162 : cluster [DBG] pgmap v215: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:12 vm07 bash[28052]: audit 2026-03-09T21:20:11.979899+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:20:12 vm07 bash[28052]: audit 2026-03-09T21:20:11.979899+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: cluster 2026-03-09T21:20:11.172552+0000 mon.a (mon.0) 989 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: cluster 2026-03-09T21:20:11.172552+0000 mon.a (mon.0) 989 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: audit 2026-03-09T21:20:11.175680+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.107:0/1528039992' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: audit 2026-03-09T21:20:11.175680+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.107:0/1528039992' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: audit 2026-03-09T21:20:11.179645+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: audit 2026-03-09T21:20:11.179645+0000 mon.a (mon.0) 990 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: cluster 2026-03-09T21:20:11.765025+0000 mgr.y (mgr.24416) 162 : cluster [DBG] pgmap v215: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: cluster 2026-03-09T21:20:11.765025+0000 mgr.y (mgr.24416) 162 : cluster [DBG] pgmap v215: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: audit 2026-03-09T21:20:11.979899+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:12 vm10 bash[23387]: audit 2026-03-09T21:20:11.979899+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:13.287 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lookup_snap PASSED [ 45%] 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:13 vm07 bash[20771]: audit 2026-03-09T21:20:12.277289+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:13 vm07 bash[20771]: audit 2026-03-09T21:20:12.277289+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:13 vm07 bash[20771]: cluster 2026-03-09T21:20:12.280202+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:13 vm07 bash[20771]: cluster 2026-03-09T21:20:12.280202+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:13 vm07 bash[20771]: cluster 2026-03-09T21:20:13.283639+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:13 vm07 bash[20771]: cluster 2026-03-09T21:20:13.283639+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:13 vm07 bash[28052]: audit 2026-03-09T21:20:12.277289+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:13 vm07 bash[28052]: audit 2026-03-09T21:20:12.277289+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:13 vm07 bash[28052]: cluster 2026-03-09T21:20:12.280202+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:13 vm07 bash[28052]: cluster 2026-03-09T21:20:12.280202+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:13 vm07 bash[28052]: cluster 2026-03-09T21:20:13.283639+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T21:20:13.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:13 vm07 bash[28052]: cluster 2026-03-09T21:20:13.283639+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T21:20:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:13 vm10 bash[23387]: audit 2026-03-09T21:20:12.277289+0000 mon.a (mon.0) 991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:13 vm10 bash[23387]: audit 2026-03-09T21:20:12.277289+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:13 vm10 bash[23387]: cluster 2026-03-09T21:20:12.280202+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T21:20:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:13 vm10 bash[23387]: cluster 2026-03-09T21:20:12.280202+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-09T21:20:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:13 vm10 bash[23387]: cluster 2026-03-09T21:20:13.283639+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T21:20:13.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:13 vm10 bash[23387]: cluster 2026-03-09T21:20:13.283639+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-09T21:20:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:14 vm07 bash[20771]: cluster 2026-03-09T21:20:13.765426+0000 mgr.y (mgr.24416) 163 : cluster [DBG] pgmap v218: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:14 vm07 bash[20771]: cluster 2026-03-09T21:20:13.765426+0000 mgr.y (mgr.24416) 163 : cluster [DBG] pgmap v218: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:14 vm07 bash[28052]: cluster 2026-03-09T21:20:13.765426+0000 mgr.y (mgr.24416) 163 : cluster [DBG] pgmap v218: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:14 vm07 bash[28052]: cluster 2026-03-09T21:20:13.765426+0000 mgr.y (mgr.24416) 163 : cluster [DBG] pgmap v218: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:14 vm10 bash[23387]: cluster 2026-03-09T21:20:13.765426+0000 mgr.y (mgr.24416) 163 : cluster [DBG] pgmap v218: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:14 vm10 bash[23387]: cluster 2026-03-09T21:20:13.765426+0000 mgr.y (mgr.24416) 163 : cluster [DBG] pgmap v218: 164 pgs: 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:15 vm07 bash[20771]: cluster 2026-03-09T21:20:14.309588+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:15 vm07 bash[20771]: cluster 2026-03-09T21:20:14.309588+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:15 vm07 bash[20771]: cluster 2026-03-09T21:20:14.757358+0000 mon.a (mon.0) 995 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:15 vm07 bash[20771]: cluster 2026-03-09T21:20:14.757358+0000 mon.a (mon.0) 995 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:15 vm07 bash[28052]: cluster 2026-03-09T21:20:14.309588+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:15 vm07 bash[28052]: cluster 2026-03-09T21:20:14.309588+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T21:20:15.615 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:15 vm07 bash[28052]: cluster 2026-03-09T21:20:14.757358+0000 mon.a (mon.0) 995 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:15 vm07 bash[28052]: cluster 2026-03-09T21:20:14.757358+0000 mon.a (mon.0) 995 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:15 vm10 bash[23387]: cluster 2026-03-09T21:20:14.309588+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T21:20:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:15 vm10 bash[23387]: cluster 2026-03-09T21:20:14.309588+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-09T21:20:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:15 vm10 bash[23387]: cluster 2026-03-09T21:20:14.757358+0000 mon.a (mon.0) 995 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:15 vm10 bash[23387]: cluster 2026-03-09T21:20:14.757358+0000 mon.a (mon.0) 995 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: cluster 2026-03-09T21:20:15.318796+0000 mon.a (mon.0) 996 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: cluster 2026-03-09T21:20:15.318796+0000 mon.a (mon.0) 996 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: audit 2026-03-09T21:20:15.334013+0000 mon.a (mon.0) 997 : audit [INF] 
from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: audit 2026-03-09T21:20:15.334013+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: cluster 2026-03-09T21:20:15.765768+0000 mgr.y (mgr.24416) 164 : cluster [DBG] pgmap v221: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: cluster 2026-03-09T21:20:15.765768+0000 mgr.y (mgr.24416) 164 : cluster [DBG] pgmap v221: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: audit 2026-03-09T21:20:16.311774+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: audit 2026-03-09T21:20:16.311774+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 
192.168.123.107:0/2705127335' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: cluster 2026-03-09T21:20:16.315918+0000 mon.a (mon.0) 999 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:16 vm07 bash[20771]: cluster 2026-03-09T21:20:16.315918+0000 mon.a (mon.0) 999 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: cluster 2026-03-09T21:20:15.318796+0000 mon.a (mon.0) 996 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: cluster 2026-03-09T21:20:15.318796+0000 mon.a (mon.0) 996 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: audit 2026-03-09T21:20:15.334013+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: audit 2026-03-09T21:20:15.334013+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 
192.168.123.107:0/2705127335' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: cluster 2026-03-09T21:20:15.765768+0000 mgr.y (mgr.24416) 164 : cluster [DBG] pgmap v221: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: cluster 2026-03-09T21:20:15.765768+0000 mgr.y (mgr.24416) 164 : cluster [DBG] pgmap v221: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: audit 2026-03-09T21:20:16.311774+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: audit 2026-03-09T21:20:16.311774+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 
192.168.123.107:0/2705127335' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: cluster 2026-03-09T21:20:16.315918+0000 mon.a (mon.0) 999 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T21:20:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:16 vm07 bash[28052]: cluster 2026-03-09T21:20:16.315918+0000 mon.a (mon.0) 999 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: cluster 2026-03-09T21:20:15.318796+0000 mon.a (mon.0) 996 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: cluster 2026-03-09T21:20:15.318796+0000 mon.a (mon.0) 996 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: audit 2026-03-09T21:20:15.334013+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: audit 2026-03-09T21:20:15.334013+0000 mon.a (mon.0) 997 : audit [INF] from='client.? 
192.168.123.107:0/2705127335' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: cluster 2026-03-09T21:20:15.765768+0000 mgr.y (mgr.24416) 164 : cluster [DBG] pgmap v221: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: cluster 2026-03-09T21:20:15.765768+0000 mgr.y (mgr.24416) 164 : cluster [DBG] pgmap v221: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: audit 2026-03-09T21:20:16.311774+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 192.168.123.107:0/2705127335' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: audit 2026-03-09T21:20:16.311774+0000 mon.a (mon.0) 998 : audit [INF] from='client.? 
192.168.123.107:0/2705127335' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: cluster 2026-03-09T21:20:16.315918+0000 mon.a (mon.0) 999 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T21:20:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:16 vm10 bash[23387]: cluster 2026-03-09T21:20:16.315918+0000 mon.a (mon.0) 999 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-09T21:20:16.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:20:16 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:20:17.330 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_timestamp PASSED [ 46%] 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:17 vm07 bash[20771]: audit 2026-03-09T21:20:16.345252+0000 mgr.y (mgr.24416) 165 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:17 vm07 bash[20771]: audit 2026-03-09T21:20:16.345252+0000 mgr.y (mgr.24416) 165 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:17 vm07 bash[20771]: cluster 2026-03-09T21:20:17.324466+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:17 vm07 bash[20771]: cluster 2026-03-09T21:20:17.324466+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:17 vm07 bash[28052]: audit 2026-03-09T21:20:16.345252+0000 mgr.y (mgr.24416) 165 : audit [DBG] 
from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:17 vm07 bash[28052]: audit 2026-03-09T21:20:16.345252+0000 mgr.y (mgr.24416) 165 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:17 vm07 bash[28052]: cluster 2026-03-09T21:20:17.324466+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T21:20:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:17 vm07 bash[28052]: cluster 2026-03-09T21:20:17.324466+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T21:20:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:17 vm10 bash[23387]: audit 2026-03-09T21:20:16.345252+0000 mgr.y (mgr.24416) 165 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:17 vm10 bash[23387]: audit 2026-03-09T21:20:16.345252+0000 mgr.y (mgr.24416) 165 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:17 vm10 bash[23387]: cluster 2026-03-09T21:20:17.324466+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T21:20:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:17 vm10 bash[23387]: cluster 2026-03-09T21:20:17.324466+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:18 vm07 bash[20771]: cluster 2026-03-09T21:20:17.766408+0000 mgr.y (mgr.24416) 166 : cluster [DBG] pgmap v224: 164 pgs: 164 
active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:18 vm07 bash[20771]: cluster 2026-03-09T21:20:17.766408+0000 mgr.y (mgr.24416) 166 : cluster [DBG] pgmap v224: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:18 vm07 bash[20771]: cluster 2026-03-09T21:20:18.333324+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:18 vm07 bash[20771]: cluster 2026-03-09T21:20:18.333324+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:18 vm07 bash[28052]: cluster 2026-03-09T21:20:17.766408+0000 mgr.y (mgr.24416) 166 : cluster [DBG] pgmap v224: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:18 vm07 bash[28052]: cluster 2026-03-09T21:20:17.766408+0000 mgr.y (mgr.24416) 166 : cluster [DBG] pgmap v224: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:18 vm07 bash[28052]: cluster 2026-03-09T21:20:18.333324+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T21:20:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:18 vm07 bash[28052]: cluster 2026-03-09T21:20:18.333324+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T21:20:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:18 vm10 bash[23387]: cluster 2026-03-09T21:20:17.766408+0000 mgr.y (mgr.24416) 166 : cluster [DBG] pgmap v224: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:18.695 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:18 vm10 bash[23387]: cluster 2026-03-09T21:20:17.766408+0000 mgr.y (mgr.24416) 166 : cluster [DBG] pgmap v224: 164 pgs: 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:18.695 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:18 vm10 bash[23387]: cluster 2026-03-09T21:20:18.333324+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T21:20:18.695 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:18 vm10 bash[23387]: cluster 2026-03-09T21:20:18.333324+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-09T21:20:19.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:20:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:20:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:20:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:20 vm10 bash[23387]: cluster 2026-03-09T21:20:19.423463+0000 mon.a (mon.0) 1002 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T21:20:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:20 vm10 bash[23387]: cluster 2026-03-09T21:20:19.423463+0000 mon.a (mon.0) 1002 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T21:20:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:20 vm10 bash[23387]: cluster 2026-03-09T21:20:19.766684+0000 mgr.y (mgr.24416) 167 : cluster [DBG] pgmap v227: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:20 vm10 bash[23387]: cluster 2026-03-09T21:20:19.766684+0000 mgr.y (mgr.24416) 167 : cluster [DBG] pgmap v227: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:20 vm07 bash[20771]: cluster 
2026-03-09T21:20:19.423463+0000 mon.a (mon.0) 1002 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:20 vm07 bash[20771]: cluster 2026-03-09T21:20:19.423463+0000 mon.a (mon.0) 1002 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:20 vm07 bash[20771]: cluster 2026-03-09T21:20:19.766684+0000 mgr.y (mgr.24416) 167 : cluster [DBG] pgmap v227: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:20 vm07 bash[20771]: cluster 2026-03-09T21:20:19.766684+0000 mgr.y (mgr.24416) 167 : cluster [DBG] pgmap v227: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:20 vm07 bash[28052]: cluster 2026-03-09T21:20:19.423463+0000 mon.a (mon.0) 1002 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:20 vm07 bash[28052]: cluster 2026-03-09T21:20:19.423463+0000 mon.a (mon.0) 1002 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:20 vm07 bash[28052]: cluster 2026-03-09T21:20:19.766684+0000 mgr.y (mgr.24416) 167 : cluster [DBG] pgmap v227: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:20 vm07 bash[28052]: cluster 2026-03-09T21:20:19.766684+0000 mgr.y (mgr.24416) 167 : cluster [DBG] pgmap v227: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:21.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:21 vm10 
bash[23387]: cluster 2026-03-09T21:20:20.415454+0000 mon.a (mon.0) 1003 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:21.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:21 vm10 bash[23387]: cluster 2026-03-09T21:20:20.415454+0000 mon.a (mon.0) 1003 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:21.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:21 vm10 bash[23387]: cluster 2026-03-09T21:20:20.447757+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T21:20:21.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:21 vm10 bash[23387]: cluster 2026-03-09T21:20:20.447757+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T21:20:21.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:21 vm10 bash[23387]: audit 2026-03-09T21:20:20.449141+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.107:0/2168519884' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:21.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:21 vm10 bash[23387]: audit 2026-03-09T21:20:20.449141+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 
192.168.123.107:0/2168519884' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:21 vm07 bash[20771]: cluster 2026-03-09T21:20:20.415454+0000 mon.a (mon.0) 1003 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:21 vm07 bash[20771]: cluster 2026-03-09T21:20:20.415454+0000 mon.a (mon.0) 1003 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:21 vm07 bash[20771]: cluster 2026-03-09T21:20:20.447757+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T21:20:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:21 vm07 bash[20771]: cluster 2026-03-09T21:20:20.447757+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-09T21:20:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:21 vm07 bash[20771]: audit 2026-03-09T21:20:20.449141+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.107:0/2168519884' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:22.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:21 vm07 bash[20771]: audit 2026-03-09T21:20:20.449141+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 
192.168.123.107:0/2168519884' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:21 vm07 bash[28052]: cluster 2026-03-09T21:20:20.415454+0000 mon.a (mon.0) 1003 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:20:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:21 vm07 bash[28052]: cluster 2026-03-09T21:20:20.447757+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in
2026-03-09T21:20:22.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:21 vm07 bash[28052]: audit 2026-03-09T21:20:20.449141+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 192.168.123.107:0/2168519884' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:22.609 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_snap PASSED [ 47%]
2026-03-09T21:20:22.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:22 vm10 bash[23387]: audit 2026-03-09T21:20:21.593376+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.107:0/2168519884' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:20:22.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:22 vm10 bash[23387]: cluster 2026-03-09T21:20:21.602764+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-09T21:20:22.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:22 vm10 bash[23387]: cluster 2026-03-09T21:20:21.766970+0000 mgr.y (mgr.24416) 168 : cluster [DBG] pgmap v230: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:22.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:22 vm10 bash[23387]: cluster 2026-03-09T21:20:22.603297+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:22 vm07 bash[20771]: audit 2026-03-09T21:20:21.593376+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.107:0/2168519884' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:22 vm07 bash[20771]: cluster 2026-03-09T21:20:21.602764+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:22 vm07 bash[20771]: cluster 2026-03-09T21:20:21.766970+0000 mgr.y (mgr.24416) 168 : cluster [DBG] pgmap v230: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:22 vm07 bash[20771]: cluster 2026-03-09T21:20:22.603297+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:22 vm07 bash[28052]: audit 2026-03-09T21:20:21.593376+0000 mon.a (mon.0) 1006 : audit [INF] from='client.? 192.168.123.107:0/2168519884' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:22 vm07 bash[28052]: cluster 2026-03-09T21:20:21.602764+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:22 vm07 bash[28052]: cluster 2026-03-09T21:20:21.766970+0000 mgr.y (mgr.24416) 168 : cluster [DBG] pgmap v230: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:22 vm07 bash[28052]: cluster 2026-03-09T21:20:22.603297+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in
2026-03-09T21:20:24.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:24 vm10 bash[23387]: cluster 2026-03-09T21:20:23.608307+0000 mon.a (mon.0) 1009 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in
2026-03-09T21:20:24.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:24 vm10 bash[23387]: cluster 2026-03-09T21:20:23.767341+0000 mgr.y (mgr.24416) 169 : cluster [DBG] pgmap v233: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:24.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:24 vm10 bash[23387]: audit 2026-03-09T21:20:24.022142+0000 mon.c (mon.2) 84 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:20:24.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:24 vm10 bash[23387]: audit 2026-03-09T21:20:24.346768+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:24.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:24 vm10 bash[23387]: audit 2026-03-09T21:20:24.360242+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:24 vm07 bash[20771]: cluster 2026-03-09T21:20:23.608307+0000 mon.a (mon.0) 1009 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:24 vm07 bash[20771]: cluster 2026-03-09T21:20:23.767341+0000 mgr.y (mgr.24416) 169 : cluster [DBG] pgmap v233: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:24 vm07 bash[20771]: audit 2026-03-09T21:20:24.022142+0000 mon.c (mon.2) 84 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:24 vm07 bash[20771]: audit 2026-03-09T21:20:24.346768+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:24 vm07 bash[20771]: audit 2026-03-09T21:20:24.360242+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:24 vm07 bash[28052]: cluster 2026-03-09T21:20:23.608307+0000 mon.a (mon.0) 1009 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:24 vm07 bash[28052]: cluster 2026-03-09T21:20:23.767341+0000 mgr.y (mgr.24416) 169 : cluster [DBG] pgmap v233: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:24 vm07 bash[28052]: audit 2026-03-09T21:20:24.022142+0000 mon.c (mon.2) 84 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:24 vm07 bash[28052]: audit 2026-03-09T21:20:24.346768+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:25.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:24 vm07 bash[28052]: audit 2026-03-09T21:20:24.360242+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:25.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:25 vm10 bash[23387]: cluster 2026-03-09T21:20:24.617932+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in
2026-03-09T21:20:25.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:25 vm10 bash[23387]: audit 2026-03-09T21:20:24.708337+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:20:25.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:25 vm10 bash[23387]: audit 2026-03-09T21:20:24.709406+0000 mon.c (mon.2) 86 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:20:25.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:25 vm10 bash[23387]: audit 2026-03-09T21:20:24.714784+0000 mon.a (mon.0) 1013 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:25.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:25 vm10 bash[23387]: cluster 2026-03-09T21:20:25.610584+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:25 vm07 bash[20771]: cluster 2026-03-09T21:20:24.617932+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:25 vm07 bash[20771]: audit 2026-03-09T21:20:24.708337+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:25 vm07 bash[20771]: audit 2026-03-09T21:20:24.709406+0000 mon.c (mon.2) 86 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:25 vm07 bash[20771]: audit 2026-03-09T21:20:24.714784+0000 mon.a (mon.0) 1013 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:25 vm07 bash[20771]: cluster 2026-03-09T21:20:25.610584+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:25 vm07 bash[28052]: cluster 2026-03-09T21:20:24.617932+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:25 vm07 bash[28052]: audit 2026-03-09T21:20:24.708337+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T21:20:26.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:25 vm07 bash[28052]: audit 2026-03-09T21:20:24.709406+0000 mon.c (mon.2) 86 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T21:20:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:25 vm07 bash[28052]: audit 2026-03-09T21:20:24.714784+0000 mon.a (mon.0) 1013 : audit [INF] from='mgr.24416 ' entity='mgr.y'
2026-03-09T21:20:26.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:25 vm07 bash[28052]: cluster 2026-03-09T21:20:25.610584+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in
2026-03-09T21:20:26.662 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:20:26 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:20:26.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:26 vm10 bash[23387]: cluster 2026-03-09T21:20:25.767625+0000 mgr.y (mgr.24416) 170 : cluster [DBG] pgmap v236: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:27.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:26 vm07 bash[20771]: cluster 2026-03-09T21:20:25.767625+0000 mgr.y (mgr.24416) 170 : cluster [DBG] pgmap v236: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:27.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:26 vm07 bash[28052]: cluster 2026-03-09T21:20:25.767625+0000 mgr.y (mgr.24416) 170 : cluster [DBG] pgmap v236: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:27 vm10 bash[23387]: audit 2026-03-09T21:20:26.352178+0000 mgr.y (mgr.24416) 171 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:20:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:27 vm10 bash[23387]: cluster 2026-03-09T21:20:26.651609+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in
2026-03-09T21:20:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:27 vm10 bash[23387]: audit 2026-03-09T21:20:26.666895+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.107:0/316395224' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:27 vm10 bash[23387]: audit 2026-03-09T21:20:26.667258+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:27 vm10 bash[23387]: audit 2026-03-09T21:20:26.986383+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:27 vm07 bash[20771]: audit 2026-03-09T21:20:26.352178+0000 mgr.y (mgr.24416) 171 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:27 vm07 bash[20771]: cluster 2026-03-09T21:20:26.651609+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:27 vm07 bash[20771]: audit 2026-03-09T21:20:26.666895+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.107:0/316395224' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:27 vm07 bash[20771]: audit 2026-03-09T21:20:26.667258+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:27 vm07 bash[20771]: audit 2026-03-09T21:20:26.986383+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:27 vm07 bash[28052]: audit 2026-03-09T21:20:26.352178+0000 mgr.y (mgr.24416) 171 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:27 vm07 bash[28052]: cluster 2026-03-09T21:20:26.651609+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:27 vm07 bash[28052]: audit 2026-03-09T21:20:26.666895+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.107:0/316395224' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:27 vm07 bash[28052]: audit 2026-03-09T21:20:26.667258+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:20:28.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:27 vm07 bash[28052]: audit 2026-03-09T21:20:26.986383+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:20:28.690 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback PASSED [ 48%]
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:28 vm07 bash[20771]: audit 2026-03-09T21:20:27.678625+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:28 vm07 bash[20771]: cluster 2026-03-09T21:20:27.685060+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:28 vm07 bash[20771]: cluster 2026-03-09T21:20:27.768015+0000 mgr.y (mgr.24416) 172 : cluster [DBG] pgmap v239: 196 pgs: 1 active+clean+snaptrim, 3 unknown, 192 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:20:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:20:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:28 vm07 bash[28052]: audit 2026-03-09T21:20:27.678625+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:28 vm07 bash[28052]: cluster 2026-03-09T21:20:27.685060+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in
2026-03-09T21:20:29.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:28 vm07 bash[28052]: cluster 2026-03-09T21:20:27.768015+0000 mgr.y (mgr.24416) 172 : cluster [DBG] pgmap v239: 196 pgs: 1 active+clean+snaptrim, 3 unknown, 192 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-09T21:20:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:28 vm10 bash[23387]: audit 2026-03-09T21:20:27.678625+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:20:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:28 vm10 bash[23387]: cluster 2026-03-09T21:20:27.685060+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in
2026-03-09T21:20:29.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:28 vm10 bash[23387]: cluster 2026-03-09T21:20:27.768015+0000 mgr.y (mgr.24416) 172 : cluster [DBG] pgmap v239: 196 pgs: 1 active+clean+snaptrim, 3 unknown, 192 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-09T21:20:30.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:29 vm07 bash[20771]: cluster 2026-03-09T21:20:28.684798+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-09T21:20:30.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:29 vm07 bash[28052]: cluster 2026-03-09T21:20:28.684798+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-09T21:20:30.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:29 vm10 bash[23387]: cluster 2026-03-09T21:20:28.684798+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:30 vm07 bash[20771]: cluster 2026-03-09T21:20:29.722675+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in
2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:30 vm07 bash[20771]: cluster 2026-03-09T21:20:29.768356+0000 mgr.y (mgr.24416) 173 : cluster [DBG] pgmap v242: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:30 vm07 bash[20771]: cluster 2026-03-09T21:20:30.712268+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in
2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:30 vm07 bash[28052]: cluster 2026-03-09T21:20:29.722675+0000 mon.a (mon.0) 1020 : cluster
[DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:30 vm07 bash[28052]: cluster 2026-03-09T21:20:29.722675+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:30 vm07 bash[28052]: cluster 2026-03-09T21:20:29.768356+0000 mgr.y (mgr.24416) 173 : cluster [DBG] pgmap v242: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:30 vm07 bash[28052]: cluster 2026-03-09T21:20:29.768356+0000 mgr.y (mgr.24416) 173 : cluster [DBG] pgmap v242: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:30 vm07 bash[28052]: cluster 2026-03-09T21:20:30.712268+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T21:20:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:30 vm07 bash[28052]: cluster 2026-03-09T21:20:30.712268+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T21:20:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:30 vm10 bash[23387]: cluster 2026-03-09T21:20:29.722675+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T21:20:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:30 vm10 bash[23387]: cluster 2026-03-09T21:20:29.722675+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-09T21:20:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:30 vm10 bash[23387]: cluster 2026-03-09T21:20:29.768356+0000 mgr.y (mgr.24416) 173 : cluster [DBG] pgmap v242: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:31.192 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:30 vm10 bash[23387]: cluster 2026-03-09T21:20:29.768356+0000 mgr.y (mgr.24416) 173 : cluster [DBG] pgmap v242: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:30 vm10 bash[23387]: cluster 2026-03-09T21:20:30.712268+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T21:20:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:30 vm10 bash[23387]: cluster 2026-03-09T21:20:30.712268+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:32 vm07 bash[20771]: cluster 2026-03-09T21:20:31.768705+0000 mgr.y (mgr.24416) 174 : cluster [DBG] pgmap v244: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:32 vm07 bash[20771]: cluster 2026-03-09T21:20:31.768705+0000 mgr.y (mgr.24416) 174 : cluster [DBG] pgmap v244: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:32 vm07 bash[20771]: cluster 2026-03-09T21:20:31.796607+0000 mon.a (mon.0) 1022 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:32 vm07 bash[20771]: cluster 2026-03-09T21:20:31.796607+0000 mon.a (mon.0) 1022 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:32 vm07 bash[28052]: cluster 2026-03-09T21:20:31.768705+0000 mgr.y (mgr.24416) 174 : cluster [DBG] pgmap v244: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:32 vm07 bash[28052]: cluster 2026-03-09T21:20:31.768705+0000 mgr.y (mgr.24416) 174 : cluster [DBG] pgmap v244: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:32 vm07 bash[28052]: cluster 2026-03-09T21:20:31.796607+0000 mon.a (mon.0) 1022 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T21:20:33.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:32 vm07 bash[28052]: cluster 2026-03-09T21:20:31.796607+0000 mon.a (mon.0) 1022 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T21:20:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:32 vm10 bash[23387]: cluster 2026-03-09T21:20:31.768705+0000 mgr.y (mgr.24416) 174 : cluster [DBG] pgmap v244: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:32 vm10 bash[23387]: cluster 2026-03-09T21:20:31.768705+0000 mgr.y (mgr.24416) 174 : cluster [DBG] pgmap v244: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:32 vm10 bash[23387]: cluster 2026-03-09T21:20:31.796607+0000 mon.a (mon.0) 1022 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T21:20:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:32 vm10 bash[23387]: cluster 2026-03-09T21:20:31.796607+0000 mon.a (mon.0) 1022 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-09T21:20:34.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:33 vm07 bash[20771]: cluster 2026-03-09T21:20:32.821531+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:20:33 vm07 bash[20771]: cluster 2026-03-09T21:20:32.821531+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:33 vm07 bash[20771]: audit 2026-03-09T21:20:32.838864+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:33 vm07 bash[20771]: audit 2026-03-09T21:20:32.838864+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:33 vm07 bash[28052]: cluster 2026-03-09T21:20:32.821531+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:33 vm07 bash[28052]: cluster 2026-03-09T21:20:32.821531+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:33 vm07 bash[28052]: audit 2026-03-09T21:20:32.838864+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:34.116 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:33 vm07 bash[28052]: audit 2026-03-09T21:20:32.838864+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 
192.168.123.107:0/1525439136' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:33 vm10 bash[23387]: cluster 2026-03-09T21:20:32.821531+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T21:20:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:33 vm10 bash[23387]: cluster 2026-03-09T21:20:32.821531+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-09T21:20:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:33 vm10 bash[23387]: audit 2026-03-09T21:20:32.838864+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:33 vm10 bash[23387]: audit 2026-03-09T21:20:32.838864+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:34.833 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback_removed PASSED [ 49%] 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:34 vm07 bash[20771]: cluster 2026-03-09T21:20:33.768993+0000 mgr.y (mgr.24416) 175 : cluster [DBG] pgmap v247: 196 pgs: 196 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 504 B/s wr, 1 op/s 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:34 vm07 bash[20771]: cluster 2026-03-09T21:20:33.768993+0000 mgr.y (mgr.24416) 175 : cluster [DBG] pgmap v247: 196 pgs: 196 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 504 B/s wr, 1 op/s 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:34 vm07 bash[20771]: audit 
2026-03-09T21:20:33.821433+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:34 vm07 bash[20771]: audit 2026-03-09T21:20:33.821433+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:34 vm07 bash[20771]: cluster 2026-03-09T21:20:33.825280+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:34 vm07 bash[20771]: cluster 2026-03-09T21:20:33.825280+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:34 vm07 bash[28052]: cluster 2026-03-09T21:20:33.768993+0000 mgr.y (mgr.24416) 175 : cluster [DBG] pgmap v247: 196 pgs: 196 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 504 B/s wr, 1 op/s 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:34 vm07 bash[28052]: cluster 2026-03-09T21:20:33.768993+0000 mgr.y (mgr.24416) 175 : cluster [DBG] pgmap v247: 196 pgs: 196 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 504 B/s wr, 1 op/s 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:34 vm07 bash[28052]: audit 2026-03-09T21:20:33.821433+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:34 vm07 bash[28052]: audit 2026-03-09T21:20:33.821433+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 
192.168.123.107:0/1525439136' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:34 vm07 bash[28052]: cluster 2026-03-09T21:20:33.825280+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T21:20:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:34 vm07 bash[28052]: cluster 2026-03-09T21:20:33.825280+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T21:20:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:34 vm10 bash[23387]: cluster 2026-03-09T21:20:33.768993+0000 mgr.y (mgr.24416) 175 : cluster [DBG] pgmap v247: 196 pgs: 196 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 504 B/s wr, 1 op/s 2026-03-09T21:20:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:34 vm10 bash[23387]: cluster 2026-03-09T21:20:33.768993+0000 mgr.y (mgr.24416) 175 : cluster [DBG] pgmap v247: 196 pgs: 196 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 504 B/s wr, 1 op/s 2026-03-09T21:20:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:34 vm10 bash[23387]: audit 2026-03-09T21:20:33.821433+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 192.168.123.107:0/1525439136' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:34 vm10 bash[23387]: audit 2026-03-09T21:20:33.821433+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? 
192.168.123.107:0/1525439136' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:34 vm10 bash[23387]: cluster 2026-03-09T21:20:33.825280+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T21:20:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:34 vm10 bash[23387]: cluster 2026-03-09T21:20:33.825280+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-09T21:20:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:35 vm10 bash[23387]: cluster 2026-03-09T21:20:34.827433+0000 mon.a (mon.0) 1027 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T21:20:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:35 vm10 bash[23387]: cluster 2026-03-09T21:20:34.827433+0000 mon.a (mon.0) 1027 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T21:20:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:35 vm07 bash[20771]: cluster 2026-03-09T21:20:34.827433+0000 mon.a (mon.0) 1027 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T21:20:36.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:35 vm07 bash[20771]: cluster 2026-03-09T21:20:34.827433+0000 mon.a (mon.0) 1027 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T21:20:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:35 vm07 bash[28052]: cluster 2026-03-09T21:20:34.827433+0000 mon.a (mon.0) 1027 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T21:20:36.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:35 vm07 bash[28052]: cluster 2026-03-09T21:20:34.827433+0000 mon.a (mon.0) 1027 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-09T21:20:36.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:20:36 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:20:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:36 vm10 bash[23387]: 
cluster 2026-03-09T21:20:35.769272+0000 mgr.y (mgr.24416) 176 : cluster [DBG] pgmap v250: 164 pgs: 164 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:36 vm10 bash[23387]: cluster 2026-03-09T21:20:35.769272+0000 mgr.y (mgr.24416) 176 : cluster [DBG] pgmap v250: 164 pgs: 164 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:36 vm10 bash[23387]: cluster 2026-03-09T21:20:35.845791+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:36 vm10 bash[23387]: cluster 2026-03-09T21:20:35.845791+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:36 vm10 bash[23387]: cluster 2026-03-09T21:20:35.900711+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T21:20:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:36 vm10 bash[23387]: cluster 2026-03-09T21:20:35.900711+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:36 vm07 bash[20771]: cluster 2026-03-09T21:20:35.769272+0000 mgr.y (mgr.24416) 176 : cluster [DBG] pgmap v250: 164 pgs: 164 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:36 vm07 bash[20771]: cluster 2026-03-09T21:20:35.769272+0000 mgr.y (mgr.24416) 176 : cluster [DBG] pgmap v250: 164 pgs: 164 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:36 vm07 bash[20771]: cluster 2026-03-09T21:20:35.845791+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:36 vm07 bash[20771]: cluster 2026-03-09T21:20:35.845791+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:36 vm07 bash[20771]: cluster 2026-03-09T21:20:35.900711+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:36 vm07 bash[20771]: cluster 2026-03-09T21:20:35.900711+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:36 vm07 bash[28052]: cluster 2026-03-09T21:20:35.769272+0000 mgr.y (mgr.24416) 176 : cluster [DBG] pgmap v250: 164 pgs: 164 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:36 vm07 bash[28052]: cluster 2026-03-09T21:20:35.769272+0000 mgr.y (mgr.24416) 176 : cluster [DBG] pgmap v250: 164 pgs: 164 active+clean; 455 KiB data, 367 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:36 vm07 bash[28052]: cluster 2026-03-09T21:20:35.845791+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:36 vm07 bash[28052]: cluster 2026-03-09T21:20:35.845791+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check update: 1 pool(s) do not have an application 
enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:36 vm07 bash[28052]: cluster 2026-03-09T21:20:35.900711+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T21:20:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:36 vm07 bash[28052]: cluster 2026-03-09T21:20:35.900711+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-09T21:20:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:37 vm10 bash[23387]: audit 2026-03-09T21:20:36.363109+0000 mgr.y (mgr.24416) 177 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:37 vm10 bash[23387]: audit 2026-03-09T21:20:36.363109+0000 mgr.y (mgr.24416) 177 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:37 vm10 bash[23387]: cluster 2026-03-09T21:20:36.911708+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T21:20:38.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:37 vm10 bash[23387]: cluster 2026-03-09T21:20:36.911708+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:37 vm07 bash[20771]: audit 2026-03-09T21:20:36.363109+0000 mgr.y (mgr.24416) 177 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:37 vm07 bash[20771]: audit 2026-03-09T21:20:36.363109+0000 mgr.y (mgr.24416) 177 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:37 vm07 bash[20771]: cluster 2026-03-09T21:20:36.911708+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:37 vm07 bash[20771]: cluster 2026-03-09T21:20:36.911708+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:37 vm07 bash[28052]: audit 2026-03-09T21:20:36.363109+0000 mgr.y (mgr.24416) 177 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:37 vm07 bash[28052]: audit 2026-03-09T21:20:36.363109+0000 mgr.y (mgr.24416) 177 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:37 vm07 bash[28052]: cluster 2026-03-09T21:20:36.911708+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T21:20:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:37 vm07 bash[28052]: cluster 2026-03-09T21:20:36.911708+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-09T21:20:38.902 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:20:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:20:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:20:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:38 vm10 bash[23387]: cluster 2026-03-09T21:20:37.769842+0000 mgr.y (mgr.24416) 178 : cluster [DBG] pgmap v253: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:38 vm10 bash[23387]: cluster 
2026-03-09T21:20:37.769842+0000 mgr.y (mgr.24416) 178 : cluster [DBG] pgmap v253: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:38 vm10 bash[23387]: cluster 2026-03-09T21:20:37.913104+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T21:20:39.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:38 vm10 bash[23387]: cluster 2026-03-09T21:20:37.913104+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:38 vm07 bash[20771]: cluster 2026-03-09T21:20:37.769842+0000 mgr.y (mgr.24416) 178 : cluster [DBG] pgmap v253: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:38 vm07 bash[20771]: cluster 2026-03-09T21:20:37.769842+0000 mgr.y (mgr.24416) 178 : cluster [DBG] pgmap v253: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:38 vm07 bash[20771]: cluster 2026-03-09T21:20:37.913104+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:38 vm07 bash[20771]: cluster 2026-03-09T21:20:37.913104+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:38 vm07 bash[28052]: cluster 2026-03-09T21:20:37.769842+0000 mgr.y (mgr.24416) 178 : cluster [DBG] pgmap v253: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:38 vm07 bash[28052]: cluster 2026-03-09T21:20:37.769842+0000 mgr.y (mgr.24416) 178 : cluster [DBG] 
pgmap v253: 196 pgs: 5 unknown, 191 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:38 vm07 bash[28052]: cluster 2026-03-09T21:20:37.913104+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T21:20:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:38 vm07 bash[28052]: cluster 2026-03-09T21:20:37.913104+0000 mon.a (mon.0) 1031 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-09T21:20:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:39 vm10 bash[23387]: cluster 2026-03-09T21:20:38.922759+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T21:20:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:39 vm10 bash[23387]: cluster 2026-03-09T21:20:38.922759+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T21:20:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:39 vm10 bash[23387]: audit 2026-03-09T21:20:38.939550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:40.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:39 vm10 bash[23387]: audit 2026-03-09T21:20:38.939550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 
192.168.123.107:0/3596221432' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:39 vm07 bash[20771]: cluster 2026-03-09T21:20:38.922759+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:39 vm07 bash[20771]: cluster 2026-03-09T21:20:38.922759+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:39 vm07 bash[20771]: audit 2026-03-09T21:20:38.939550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:39 vm07 bash[20771]: audit 2026-03-09T21:20:38.939550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:39 vm07 bash[28052]: cluster 2026-03-09T21:20:38.922759+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:39 vm07 bash[28052]: cluster 2026-03-09T21:20:38.922759+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:39 vm07 bash[28052]: audit 2026-03-09T21:20:38.939550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:39 vm07 bash[28052]: audit 2026-03-09T21:20:38.939550+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? 
192.168.123.107:0/3596221432' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:40.936 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_read PASSED [ 50%] 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:40 vm07 bash[20771]: cluster 2026-03-09T21:20:39.770215+0000 mgr.y (mgr.24416) 179 : cluster [DBG] pgmap v256: 196 pgs: 196 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:40 vm07 bash[20771]: cluster 2026-03-09T21:20:39.770215+0000 mgr.y (mgr.24416) 179 : cluster [DBG] pgmap v256: 196 pgs: 196 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:40 vm07 bash[20771]: audit 2026-03-09T21:20:39.923488+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:40 vm07 bash[20771]: audit 2026-03-09T21:20:39.923488+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 
192.168.123.107:0/3596221432' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:40 vm07 bash[20771]: cluster 2026-03-09T21:20:39.927782+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:40 vm07 bash[20771]: cluster 2026-03-09T21:20:39.927782+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:40 vm07 bash[28052]: cluster 2026-03-09T21:20:39.770215+0000 mgr.y (mgr.24416) 179 : cluster [DBG] pgmap v256: 196 pgs: 196 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:40 vm07 bash[28052]: cluster 2026-03-09T21:20:39.770215+0000 mgr.y (mgr.24416) 179 : cluster [DBG] pgmap v256: 196 pgs: 196 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:40 vm07 bash[28052]: audit 2026-03-09T21:20:39.923488+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:40 vm07 bash[28052]: audit 2026-03-09T21:20:39.923488+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 
192.168.123.107:0/3596221432' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:40 vm07 bash[28052]: cluster 2026-03-09T21:20:39.927782+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T21:20:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:40 vm07 bash[28052]: cluster 2026-03-09T21:20:39.927782+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T21:20:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:40 vm10 bash[23387]: cluster 2026-03-09T21:20:39.770215+0000 mgr.y (mgr.24416) 179 : cluster [DBG] pgmap v256: 196 pgs: 196 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:40 vm10 bash[23387]: cluster 2026-03-09T21:20:39.770215+0000 mgr.y (mgr.24416) 179 : cluster [DBG] pgmap v256: 196 pgs: 196 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:40 vm10 bash[23387]: audit 2026-03-09T21:20:39.923488+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 192.168.123.107:0/3596221432' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:40 vm10 bash[23387]: audit 2026-03-09T21:20:39.923488+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? 
192.168.123.107:0/3596221432' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:40 vm10 bash[23387]: cluster 2026-03-09T21:20:39.927782+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T21:20:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:40 vm10 bash[23387]: cluster 2026-03-09T21:20:39.927782+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:41 vm07 bash[20771]: cluster 2026-03-09T21:20:40.933084+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:41 vm07 bash[20771]: cluster 2026-03-09T21:20:40.933084+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:41 vm07 bash[20771]: cluster 2026-03-09T21:20:41.770494+0000 mgr.y (mgr.24416) 180 : cluster [DBG] pgmap v259: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:41 vm07 bash[20771]: cluster 2026-03-09T21:20:41.770494+0000 mgr.y (mgr.24416) 180 : cluster [DBG] pgmap v259: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:41 vm07 bash[28052]: cluster 2026-03-09T21:20:40.933084+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:41 vm07 bash[28052]: cluster 2026-03-09T21:20:40.933084+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:41 vm07 
bash[28052]: cluster 2026-03-09T21:20:41.770494+0000 mgr.y (mgr.24416) 180 : cluster [DBG] pgmap v259: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:42.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:41 vm07 bash[28052]: cluster 2026-03-09T21:20:41.770494+0000 mgr.y (mgr.24416) 180 : cluster [DBG] pgmap v259: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:41 vm10 bash[23387]: cluster 2026-03-09T21:20:40.933084+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T21:20:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:41 vm10 bash[23387]: cluster 2026-03-09T21:20:40.933084+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-09T21:20:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:41 vm10 bash[23387]: cluster 2026-03-09T21:20:41.770494+0000 mgr.y (mgr.24416) 180 : cluster [DBG] pgmap v259: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:41 vm10 bash[23387]: cluster 2026-03-09T21:20:41.770494+0000 mgr.y (mgr.24416) 180 : cluster [DBG] pgmap v259: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:43 vm07 bash[20771]: cluster 2026-03-09T21:20:41.944885+0000 mon.a (mon.0) 1037 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:43 vm07 bash[20771]: cluster 2026-03-09T21:20:41.944885+0000 mon.a (mon.0) 1037 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:43 vm07 bash[20771]: cluster 2026-03-09T21:20:41.976228+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:43 vm07 bash[20771]: cluster 2026-03-09T21:20:41.976228+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:43 vm07 bash[20771]: audit 2026-03-09T21:20:41.992836+0000 mon.c (mon.2) 89 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:43 vm07 bash[20771]: audit 2026-03-09T21:20:41.992836+0000 mon.c (mon.2) 89 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:43 vm07 bash[28052]: cluster 2026-03-09T21:20:41.944885+0000 mon.a (mon.0) 1037 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:43 vm07 bash[28052]: cluster 2026-03-09T21:20:41.944885+0000 mon.a (mon.0) 1037 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:43 vm07 bash[28052]: cluster 2026-03-09T21:20:41.976228+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:43 vm07 bash[28052]: cluster 2026-03-09T21:20:41.976228+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:20:43 vm07 bash[28052]: audit 2026-03-09T21:20:41.992836+0000 mon.c (mon.2) 89 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:43 vm07 bash[28052]: audit 2026-03-09T21:20:41.992836+0000 mon.c (mon.2) 89 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:43 vm10 bash[23387]: cluster 2026-03-09T21:20:41.944885+0000 mon.a (mon.0) 1037 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:43 vm10 bash[23387]: cluster 2026-03-09T21:20:41.944885+0000 mon.a (mon.0) 1037 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:43 vm10 bash[23387]: cluster 2026-03-09T21:20:41.976228+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T21:20:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:43 vm10 bash[23387]: cluster 2026-03-09T21:20:41.976228+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-09T21:20:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:43 vm10 bash[23387]: audit 2026-03-09T21:20:41.992836+0000 mon.c (mon.2) 89 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:43 vm10 bash[23387]: audit 2026-03-09T21:20:41.992836+0000 mon.c (mon.2) 89 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: cluster 2026-03-09T21:20:43.053275+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: cluster 2026-03-09T21:20:43.053275+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: audit 2026-03-09T21:20:43.089407+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.107:0/1568786524' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: audit 2026-03-09T21:20:43.089407+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.107:0/1568786524' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: audit 2026-03-09T21:20:43.089797+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: audit 2026-03-09T21:20:43.089797+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: cluster 2026-03-09T21:20:43.770885+0000 mgr.y (mgr.24416) 181 : cluster [DBG] pgmap v262: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:44 vm07 bash[20771]: cluster 2026-03-09T21:20:43.770885+0000 mgr.y (mgr.24416) 181 : cluster [DBG] pgmap v262: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: cluster 2026-03-09T21:20:43.053275+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: cluster 2026-03-09T21:20:43.053275+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: audit 2026-03-09T21:20:43.089407+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.107:0/1568786524' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: audit 2026-03-09T21:20:43.089407+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.107:0/1568786524' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: audit 2026-03-09T21:20:43.089797+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: audit 2026-03-09T21:20:43.089797+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: cluster 2026-03-09T21:20:43.770885+0000 mgr.y (mgr.24416) 181 : cluster [DBG] pgmap v262: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:44.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:44 vm07 bash[28052]: cluster 2026-03-09T21:20:43.770885+0000 mgr.y (mgr.24416) 181 : cluster [DBG] pgmap v262: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: cluster 2026-03-09T21:20:43.053275+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: cluster 2026-03-09T21:20:43.053275+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: audit 2026-03-09T21:20:43.089407+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 192.168.123.107:0/1568786524' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: audit 2026-03-09T21:20:43.089407+0000 mon.c (mon.2) 90 : audit [INF] from='client.? 
192.168.123.107:0/1568786524' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: audit 2026-03-09T21:20:43.089797+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: audit 2026-03-09T21:20:43.089797+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: cluster 2026-03-09T21:20:43.770885+0000 mgr.y (mgr.24416) 181 : cluster [DBG] pgmap v262: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:44 vm10 bash[23387]: cluster 2026-03-09T21:20:43.770885+0000 mgr.y (mgr.24416) 181 : cluster [DBG] pgmap v262: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:45.088 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap PASSED [ 51%] 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:45 vm07 bash[20771]: audit 2026-03-09T21:20:44.054271+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:45 vm07 bash[20771]: audit 2026-03-09T21:20:44.054271+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:45 vm07 bash[20771]: cluster 2026-03-09T21:20:44.075914+0000 mon.a (mon.0) 1042 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:45 vm07 bash[20771]: cluster 2026-03-09T21:20:44.075914+0000 mon.a (mon.0) 1042 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:45 vm07 bash[28052]: audit 2026-03-09T21:20:44.054271+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:45 vm07 bash[28052]: audit 2026-03-09T21:20:44.054271+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:45 vm07 bash[28052]: cluster 2026-03-09T21:20:44.075914+0000 mon.a (mon.0) 1042 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T21:20:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:45 vm07 bash[28052]: cluster 2026-03-09T21:20:44.075914+0000 mon.a (mon.0) 1042 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T21:20:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:45 vm10 bash[23387]: audit 2026-03-09T21:20:44.054271+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:45 vm10 bash[23387]: audit 2026-03-09T21:20:44.054271+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:45 vm10 bash[23387]: cluster 2026-03-09T21:20:44.075914+0000 mon.a (mon.0) 1042 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T21:20:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:45 vm10 bash[23387]: cluster 2026-03-09T21:20:44.075914+0000 mon.a (mon.0) 1042 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:46 vm07 bash[20771]: cluster 2026-03-09T21:20:45.084563+0000 mon.a (mon.0) 1043 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:46 vm07 bash[20771]: cluster 2026-03-09T21:20:45.084563+0000 mon.a (mon.0) 1043 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:46 vm07 bash[20771]: cluster 2026-03-09T21:20:45.771220+0000 mgr.y (mgr.24416) 182 : cluster [DBG] pgmap v265: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:46 vm07 bash[20771]: cluster 2026-03-09T21:20:45.771220+0000 mgr.y (mgr.24416) 182 : cluster [DBG] pgmap v265: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:46 vm07 bash[28052]: cluster 2026-03-09T21:20:45.084563+0000 mon.a (mon.0) 1043 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:46 vm07 bash[28052]: cluster 2026-03-09T21:20:45.084563+0000 mon.a (mon.0) 1043 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:46 vm07 bash[28052]: cluster 
2026-03-09T21:20:45.771220+0000 mgr.y (mgr.24416) 182 : cluster [DBG] pgmap v265: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:46 vm07 bash[28052]: cluster 2026-03-09T21:20:45.771220+0000 mgr.y (mgr.24416) 182 : cluster [DBG] pgmap v265: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:46.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:46 vm10 bash[23387]: cluster 2026-03-09T21:20:45.084563+0000 mon.a (mon.0) 1043 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T21:20:46.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:46 vm10 bash[23387]: cluster 2026-03-09T21:20:45.084563+0000 mon.a (mon.0) 1043 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-09T21:20:46.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:46 vm10 bash[23387]: cluster 2026-03-09T21:20:45.771220+0000 mgr.y (mgr.24416) 182 : cluster [DBG] pgmap v265: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:46.371 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:46 vm10 bash[23387]: cluster 2026-03-09T21:20:45.771220+0000 mgr.y (mgr.24416) 182 : cluster [DBG] pgmap v265: 164 pgs: 164 active+clean; 455 KiB data, 368 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:46.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:20:46 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:20:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:47 vm10 bash[23387]: cluster 2026-03-09T21:20:46.105928+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T21:20:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:47 vm10 bash[23387]: cluster 2026-03-09T21:20:46.105928+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e207: 8 
total, 8 up, 8 in 2026-03-09T21:20:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:47 vm10 bash[23387]: audit 2026-03-09T21:20:46.371517+0000 mgr.y (mgr.24416) 183 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:47 vm10 bash[23387]: audit 2026-03-09T21:20:46.371517+0000 mgr.y (mgr.24416) 183 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:47 vm07 bash[20771]: cluster 2026-03-09T21:20:46.105928+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:47 vm07 bash[20771]: cluster 2026-03-09T21:20:46.105928+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:47 vm07 bash[20771]: audit 2026-03-09T21:20:46.371517+0000 mgr.y (mgr.24416) 183 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:47 vm07 bash[20771]: audit 2026-03-09T21:20:46.371517+0000 mgr.y (mgr.24416) 183 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:47 vm07 bash[28052]: cluster 2026-03-09T21:20:46.105928+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:47 vm07 bash[28052]: cluster 2026-03-09T21:20:46.105928+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 
2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:47 vm07 bash[28052]: audit 2026-03-09T21:20:46.371517+0000 mgr.y (mgr.24416) 183 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:47.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:47 vm07 bash[28052]: audit 2026-03-09T21:20:46.371517+0000 mgr.y (mgr.24416) 183 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: cluster 2026-03-09T21:20:47.110545+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: cluster 2026-03-09T21:20:47.110545+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: audit 2026-03-09T21:20:47.158084+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.107:0/611600942' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: audit 2026-03-09T21:20:47.158084+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.107:0/611600942' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: audit 2026-03-09T21:20:47.158997+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: audit 2026-03-09T21:20:47.158997+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: cluster 2026-03-09T21:20:47.771853+0000 mgr.y (mgr.24416) 184 : cluster [DBG] pgmap v268: 196 pgs: 196 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: cluster 2026-03-09T21:20:47.771853+0000 mgr.y (mgr.24416) 184 : cluster [DBG] pgmap v268: 196 pgs: 196 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: audit 2026-03-09T21:20:48.104745+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: audit 2026-03-09T21:20:48.104745+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: cluster 2026-03-09T21:20:48.107883+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T21:20:48.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:48 vm10 bash[23387]: cluster 2026-03-09T21:20:48.107883+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: cluster 2026-03-09T21:20:47.110545+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: cluster 2026-03-09T21:20:47.110545+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: audit 2026-03-09T21:20:47.158084+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.107:0/611600942' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: audit 2026-03-09T21:20:47.158084+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.107:0/611600942' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: audit 2026-03-09T21:20:47.158997+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: audit 2026-03-09T21:20:47.158997+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: cluster 2026-03-09T21:20:47.771853+0000 mgr.y (mgr.24416) 184 : cluster [DBG] pgmap v268: 196 pgs: 196 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: cluster 2026-03-09T21:20:47.771853+0000 mgr.y (mgr.24416) 184 : cluster [DBG] pgmap v268: 196 pgs: 196 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: audit 2026-03-09T21:20:48.104745+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: audit 2026-03-09T21:20:48.104745+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: cluster 2026-03-09T21:20:48.107883+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:48 vm07 bash[20771]: cluster 2026-03-09T21:20:48.107883+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: cluster 2026-03-09T21:20:47.110545+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: cluster 2026-03-09T21:20:47.110545+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: audit 2026-03-09T21:20:47.158084+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.107:0/611600942' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: audit 2026-03-09T21:20:47.158084+0000 mon.c (mon.2) 91 : audit [INF] from='client.? 192.168.123.107:0/611600942' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: audit 2026-03-09T21:20:47.158997+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: audit 2026-03-09T21:20:47.158997+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: cluster 2026-03-09T21:20:47.771853+0000 mgr.y (mgr.24416) 184 : cluster [DBG] pgmap v268: 196 pgs: 196 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: cluster 2026-03-09T21:20:47.771853+0000 mgr.y (mgr.24416) 184 : cluster [DBG] pgmap v268: 196 pgs: 196 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: audit 2026-03-09T21:20:48.104745+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: audit 2026-03-09T21:20:48.104745+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: cluster 2026-03-09T21:20:48.107883+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T21:20:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:48 vm07 bash[28052]: cluster 2026-03-09T21:20:48.107883+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-09T21:20:49.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:20:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:20:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:20:49.127 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap_aio PASSED [ 52%] 2026-03-09T21:20:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:49 vm10 bash[23387]: cluster 2026-03-09T21:20:48.130389+0000 mon.a (mon.0) 1049 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:49 vm10 bash[23387]: cluster 2026-03-09T21:20:48.130389+0000 mon.a (mon.0) 1049 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:49 vm10 bash[23387]: cluster 2026-03-09T21:20:49.116242+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T21:20:49.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:49 vm10 bash[23387]: cluster 2026-03-09T21:20:49.116242+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:49 vm07 bash[20771]: cluster 2026-03-09T21:20:48.130389+0000 mon.a (mon.0) 1049 : cluster [WRN] Health check update: 2 pool(s) do not 
have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:49 vm07 bash[20771]: cluster 2026-03-09T21:20:48.130389+0000 mon.a (mon.0) 1049 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:49 vm07 bash[20771]: cluster 2026-03-09T21:20:49.116242+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:49 vm07 bash[20771]: cluster 2026-03-09T21:20:49.116242+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:49 vm07 bash[28052]: cluster 2026-03-09T21:20:48.130389+0000 mon.a (mon.0) 1049 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:49 vm07 bash[28052]: cluster 2026-03-09T21:20:48.130389+0000 mon.a (mon.0) 1049 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:49 vm07 bash[28052]: cluster 2026-03-09T21:20:49.116242+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T21:20:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:49 vm07 bash[28052]: cluster 2026-03-09T21:20:49.116242+0000 mon.a (mon.0) 1050 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in 2026-03-09T21:20:50.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:50 vm10 bash[23387]: cluster 2026-03-09T21:20:49.772139+0000 mgr.y (mgr.24416) 185 : cluster [DBG] pgmap v271: 164 pgs: 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:50.442 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:50 vm10 bash[23387]: cluster 2026-03-09T21:20:49.772139+0000 mgr.y (mgr.24416) 185 : cluster [DBG] pgmap v271: 164 pgs: 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:50.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:50 vm10 bash[23387]: cluster 2026-03-09T21:20:50.117217+0000 mon.a (mon.0) 1051 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T21:20:50.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:50 vm10 bash[23387]: cluster 2026-03-09T21:20:50.117217+0000 mon.a (mon.0) 1051 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:50 vm07 bash[20771]: cluster 2026-03-09T21:20:49.772139+0000 mgr.y (mgr.24416) 185 : cluster [DBG] pgmap v271: 164 pgs: 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:50 vm07 bash[20771]: cluster 2026-03-09T21:20:49.772139+0000 mgr.y (mgr.24416) 185 : cluster [DBG] pgmap v271: 164 pgs: 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:50 vm07 bash[20771]: cluster 2026-03-09T21:20:50.117217+0000 mon.a (mon.0) 1051 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:50 vm07 bash[20771]: cluster 2026-03-09T21:20:50.117217+0000 mon.a (mon.0) 1051 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:50 vm07 bash[28052]: cluster 2026-03-09T21:20:49.772139+0000 mgr.y (mgr.24416) 185 : cluster [DBG] pgmap v271: 164 pgs: 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:50.615 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:50 vm07 bash[28052]: cluster 2026-03-09T21:20:49.772139+0000 mgr.y (mgr.24416) 185 : cluster [DBG] pgmap v271: 164 pgs: 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:50 vm07 bash[28052]: cluster 2026-03-09T21:20:50.117217+0000 mon.a (mon.0) 1051 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T21:20:50.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:50 vm07 bash[28052]: cluster 2026-03-09T21:20:50.117217+0000 mon.a (mon.0) 1051 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: cluster 2026-03-09T21:20:51.126282+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: cluster 2026-03-09T21:20:51.126282+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: audit 2026-03-09T21:20:51.161837+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.107:0/765569974' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: audit 2026-03-09T21:20:51.161837+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.107:0/765569974' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: audit 2026-03-09T21:20:51.162307+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: audit 2026-03-09T21:20:51.162307+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: cluster 2026-03-09T21:20:51.772422+0000 mgr.y (mgr.24416) 186 : cluster [DBG] pgmap v274: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:52.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:52 vm10 bash[23387]: cluster 2026-03-09T21:20:51.772422+0000 mgr.y (mgr.24416) 186 : cluster [DBG] pgmap v274: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: cluster 2026-03-09T21:20:51.126282+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: cluster 2026-03-09T21:20:51.126282+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: audit 2026-03-09T21:20:51.161837+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.107:0/765569974' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: audit 2026-03-09T21:20:51.161837+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 
192.168.123.107:0/765569974' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: audit 2026-03-09T21:20:51.162307+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: audit 2026-03-09T21:20:51.162307+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: cluster 2026-03-09T21:20:51.772422+0000 mgr.y (mgr.24416) 186 : cluster [DBG] pgmap v274: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:52 vm07 bash[20771]: cluster 2026-03-09T21:20:51.772422+0000 mgr.y (mgr.24416) 186 : cluster [DBG] pgmap v274: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: cluster 2026-03-09T21:20:51.126282+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: cluster 2026-03-09T21:20:51.126282+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: audit 2026-03-09T21:20:51.161837+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 
192.168.123.107:0/765569974' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: audit 2026-03-09T21:20:51.161837+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.107:0/765569974' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: audit 2026-03-09T21:20:51.162307+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: audit 2026-03-09T21:20:51.162307+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: cluster 2026-03-09T21:20:51.772422+0000 mgr.y (mgr.24416) 186 : cluster [DBG] pgmap v274: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:52.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:52 vm07 bash[28052]: cluster 2026-03-09T21:20:51.772422+0000 mgr.y (mgr.24416) 186 : cluster [DBG] pgmap v274: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 404 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:20:53.144 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_ops PASSED [ 53%] 2026-03-09T21:20:53.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:53 vm10 bash[23387]: audit 2026-03-09T21:20:52.131471+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:53.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:53 vm10 bash[23387]: audit 2026-03-09T21:20:52.131471+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:53.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:53 vm10 bash[23387]: cluster 2026-03-09T21:20:52.135290+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T21:20:53.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:53 vm10 bash[23387]: cluster 2026-03-09T21:20:52.135290+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:53 vm07 bash[20771]: audit 2026-03-09T21:20:52.131471+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:53 vm07 bash[20771]: audit 2026-03-09T21:20:52.131471+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:53 vm07 bash[20771]: cluster 2026-03-09T21:20:52.135290+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:53 vm07 bash[20771]: cluster 2026-03-09T21:20:52.135290+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:53 vm07 bash[28052]: audit 2026-03-09T21:20:52.131471+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:53 vm07 bash[28052]: audit 2026-03-09T21:20:52.131471+0000 mon.a (mon.0) 1054 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:53 vm07 bash[28052]: cluster 2026-03-09T21:20:52.135290+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T21:20:53.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:53 vm07 bash[28052]: cluster 2026-03-09T21:20:52.135290+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-09T21:20:54.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:54 vm10 bash[23387]: cluster 2026-03-09T21:20:53.139351+0000 mon.a (mon.0) 1056 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T21:20:54.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:54 vm10 bash[23387]: cluster 2026-03-09T21:20:53.139351+0000 mon.a (mon.0) 1056 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T21:20:54.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:54 vm10 bash[23387]: cluster 2026-03-09T21:20:53.772719+0000 mgr.y (mgr.24416) 187 : cluster [DBG] pgmap v277: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:54.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:54 vm10 bash[23387]: cluster 2026-03-09T21:20:53.772719+0000 mgr.y (mgr.24416) 187 : cluster [DBG] pgmap v277: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:54 vm07 bash[20771]: cluster 2026-03-09T21:20:53.139351+0000 mon.a (mon.0) 1056 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T21:20:54.615 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:54 vm07 bash[20771]: cluster 2026-03-09T21:20:53.139351+0000 mon.a (mon.0) 1056 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:54 vm07 bash[20771]: cluster 2026-03-09T21:20:53.772719+0000 mgr.y (mgr.24416) 187 : cluster [DBG] pgmap v277: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:54 vm07 bash[20771]: cluster 2026-03-09T21:20:53.772719+0000 mgr.y (mgr.24416) 187 : cluster [DBG] pgmap v277: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:54 vm07 bash[28052]: cluster 2026-03-09T21:20:53.139351+0000 mon.a (mon.0) 1056 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:54 vm07 bash[28052]: cluster 2026-03-09T21:20:53.139351+0000 mon.a (mon.0) 1056 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:54 vm07 bash[28052]: cluster 2026-03-09T21:20:53.772719+0000 mgr.y (mgr.24416) 187 : cluster [DBG] pgmap v277: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:54.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:54 vm07 bash[28052]: cluster 2026-03-09T21:20:53.772719+0000 mgr.y (mgr.24416) 187 : cluster [DBG] pgmap v277: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:55 vm10 bash[23387]: cluster 2026-03-09T21:20:54.150353+0000 mon.a (mon.0) 1057 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-09T21:20:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:55 vm10 bash[23387]: cluster 2026-03-09T21:20:54.150353+0000 mon.a (mon.0) 1057 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:55 vm10 bash[23387]: cluster 2026-03-09T21:20:54.167276+0000 mon.a (mon.0) 1058 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T21:20:55.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:55 vm10 bash[23387]: cluster 2026-03-09T21:20:54.167276+0000 mon.a (mon.0) 1058 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:55 vm07 bash[20771]: cluster 2026-03-09T21:20:54.150353+0000 mon.a (mon.0) 1057 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:55 vm07 bash[20771]: cluster 2026-03-09T21:20:54.150353+0000 mon.a (mon.0) 1057 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:55 vm07 bash[20771]: cluster 2026-03-09T21:20:54.167276+0000 mon.a (mon.0) 1058 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:55 vm07 bash[20771]: cluster 2026-03-09T21:20:54.167276+0000 mon.a (mon.0) 1058 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:55 vm07 bash[28052]: cluster 2026-03-09T21:20:54.150353+0000 mon.a (mon.0) 1057 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:55 vm07 bash[28052]: cluster 
2026-03-09T21:20:54.150353+0000 mon.a (mon.0) 1057 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:55 vm07 bash[28052]: cluster 2026-03-09T21:20:54.167276+0000 mon.a (mon.0) 1058 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T21:20:55.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:55 vm07 bash[28052]: cluster 2026-03-09T21:20:54.167276+0000 mon.a (mon.0) 1058 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-09T21:20:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:56 vm10 bash[23387]: cluster 2026-03-09T21:20:55.170782+0000 mon.a (mon.0) 1059 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T21:20:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:56 vm10 bash[23387]: cluster 2026-03-09T21:20:55.170782+0000 mon.a (mon.0) 1059 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T21:20:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:56 vm10 bash[23387]: audit 2026-03-09T21:20:55.217349+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:56 vm10 bash[23387]: audit 2026-03-09T21:20:55.217349+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 
192.168.123.107:0/3980895950' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:56 vm10 bash[23387]: cluster 2026-03-09T21:20:55.772988+0000 mgr.y (mgr.24416) 188 : cluster [DBG] pgmap v280: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:56.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:56 vm10 bash[23387]: cluster 2026-03-09T21:20:55.772988+0000 mgr.y (mgr.24416) 188 : cluster [DBG] pgmap v280: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:56.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:20:56 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:56 vm07 bash[20771]: cluster 2026-03-09T21:20:55.170782+0000 mon.a (mon.0) 1059 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:56 vm07 bash[20771]: cluster 2026-03-09T21:20:55.170782+0000 mon.a (mon.0) 1059 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:56 vm07 bash[20771]: audit 2026-03-09T21:20:55.217349+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:56 vm07 bash[20771]: audit 2026-03-09T21:20:55.217349+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 
192.168.123.107:0/3980895950' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:56 vm07 bash[20771]: cluster 2026-03-09T21:20:55.772988+0000 mgr.y (mgr.24416) 188 : cluster [DBG] pgmap v280: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:56 vm07 bash[20771]: cluster 2026-03-09T21:20:55.772988+0000 mgr.y (mgr.24416) 188 : cluster [DBG] pgmap v280: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:56 vm07 bash[28052]: cluster 2026-03-09T21:20:55.170782+0000 mon.a (mon.0) 1059 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:56 vm07 bash[28052]: cluster 2026-03-09T21:20:55.170782+0000 mon.a (mon.0) 1059 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:56 vm07 bash[28052]: audit 2026-03-09T21:20:55.217349+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:56 vm07 bash[28052]: audit 2026-03-09T21:20:55.217349+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? 
192.168.123.107:0/3980895950' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:56 vm07 bash[28052]: cluster 2026-03-09T21:20:55.772988+0000 mgr.y (mgr.24416) 188 : cluster [DBG] pgmap v280: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:56.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:56 vm07 bash[28052]: cluster 2026-03-09T21:20:55.772988+0000 mgr.y (mgr.24416) 188 : cluster [DBG] pgmap v280: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:57.201 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute_op PASSED [ 54%] 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: audit 2026-03-09T21:20:56.190437+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: audit 2026-03-09T21:20:56.190437+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 
192.168.123.107:0/3980895950' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: cluster 2026-03-09T21:20:56.200398+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: cluster 2026-03-09T21:20:56.200398+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: audit 2026-03-09T21:20:56.374882+0000 mgr.y (mgr.24416) 189 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: audit 2026-03-09T21:20:56.374882+0000 mgr.y (mgr.24416) 189 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: audit 2026-03-09T21:20:56.999245+0000 mon.c (mon.2) 92 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:57 vm07 bash[20771]: audit 2026-03-09T21:20:56.999245+0000 mon.c (mon.2) 92 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: audit 2026-03-09T21:20:56.190437+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 
192.168.123.107:0/3980895950' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: audit 2026-03-09T21:20:56.190437+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: cluster 2026-03-09T21:20:56.200398+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: cluster 2026-03-09T21:20:56.200398+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: audit 2026-03-09T21:20:56.374882+0000 mgr.y (mgr.24416) 189 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: audit 2026-03-09T21:20:56.374882+0000 mgr.y (mgr.24416) 189 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: audit 2026-03-09T21:20:56.999245+0000 mon.c (mon.2) 92 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:57.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:57 vm07 bash[28052]: audit 2026-03-09T21:20:56.999245+0000 mon.c (mon.2) 92 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:57.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: audit 2026-03-09T21:20:56.190437+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: audit 2026-03-09T21:20:56.190437+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 192.168.123.107:0/3980895950' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: cluster 2026-03-09T21:20:56.200398+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: cluster 2026-03-09T21:20:56.200398+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: audit 2026-03-09T21:20:56.374882+0000 mgr.y (mgr.24416) 189 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: audit 2026-03-09T21:20:56.374882+0000 mgr.y (mgr.24416) 189 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: audit 2026-03-09T21:20:56.999245+0000 mon.c (mon.2) 92 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:57.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:57 vm10 bash[23387]: audit 2026-03-09T21:20:56.999245+0000 mon.c (mon.2) 92 : audit [DBG] 
from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:58 vm07 bash[20771]: cluster 2026-03-09T21:20:57.196084+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:58 vm07 bash[20771]: cluster 2026-03-09T21:20:57.196084+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:58 vm07 bash[20771]: cluster 2026-03-09T21:20:57.773464+0000 mgr.y (mgr.24416) 190 : cluster [DBG] pgmap v283: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:58 vm07 bash[20771]: cluster 2026-03-09T21:20:57.773464+0000 mgr.y (mgr.24416) 190 : cluster [DBG] pgmap v283: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:58 vm07 bash[28052]: cluster 2026-03-09T21:20:57.196084+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:58 vm07 bash[28052]: cluster 2026-03-09T21:20:57.196084+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:58 vm07 bash[28052]: cluster 2026-03-09T21:20:57.773464+0000 mgr.y (mgr.24416) 190 : cluster [DBG] pgmap v283: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:58.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:58 vm07 bash[28052]: cluster 2026-03-09T21:20:57.773464+0000 mgr.y (mgr.24416) 190 : cluster [DBG] pgmap v283: 
164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:58.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:58 vm10 bash[23387]: cluster 2026-03-09T21:20:57.196084+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T21:20:58.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:58 vm10 bash[23387]: cluster 2026-03-09T21:20:57.196084+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in 2026-03-09T21:20:58.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:58 vm10 bash[23387]: cluster 2026-03-09T21:20:57.773464+0000 mgr.y (mgr.24416) 190 : cluster [DBG] pgmap v283: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:58.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:58 vm10 bash[23387]: cluster 2026-03-09T21:20:57.773464+0000 mgr.y (mgr.24416) 190 : cluster [DBG] pgmap v283: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:20:59.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:20:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:20:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:20:59.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:59 vm07 bash[20771]: cluster 2026-03-09T21:20:58.241185+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T21:20:59.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:20:59 vm07 bash[20771]: cluster 2026-03-09T21:20:58.241185+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T21:20:59.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:59 vm07 bash[28052]: cluster 2026-03-09T21:20:58.241185+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T21:20:59.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:20:59 vm07 
bash[28052]: cluster 2026-03-09T21:20:58.241185+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T21:20:59.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:59 vm10 bash[23387]: cluster 2026-03-09T21:20:58.241185+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T21:20:59.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:20:59 vm10 bash[23387]: cluster 2026-03-09T21:20:58.241185+0000 mon.a (mon.0) 1064 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: cluster 2026-03-09T21:20:59.257781+0000 mon.a (mon.0) 1065 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: cluster 2026-03-09T21:20:59.257781+0000 mon.a (mon.0) 1065 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: audit 2026-03-09T21:20:59.284585+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.107:0/884575754' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: audit 2026-03-09T21:20:59.284585+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.107:0/884575754' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: audit 2026-03-09T21:20:59.285082+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: audit 2026-03-09T21:20:59.285082+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: cluster 2026-03-09T21:20:59.773932+0000 mgr.y (mgr.24416) 191 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:00 vm07 bash[20771]: cluster 2026-03-09T21:20:59.773932+0000 mgr.y (mgr.24416) 191 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: cluster 2026-03-09T21:20:59.257781+0000 mon.a (mon.0) 1065 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: cluster 2026-03-09T21:20:59.257781+0000 mon.a (mon.0) 1065 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: audit 2026-03-09T21:20:59.284585+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.107:0/884575754' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: audit 2026-03-09T21:20:59.284585+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.107:0/884575754' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: audit 2026-03-09T21:20:59.285082+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: audit 2026-03-09T21:20:59.285082+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: cluster 2026-03-09T21:20:59.773932+0000 mgr.y (mgr.24416) 191 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:00.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:00 vm07 bash[28052]: cluster 2026-03-09T21:20:59.773932+0000 mgr.y (mgr.24416) 191 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: cluster 2026-03-09T21:20:59.257781+0000 mon.a (mon.0) 1065 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: cluster 2026-03-09T21:20:59.257781+0000 mon.a (mon.0) 1065 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: audit 2026-03-09T21:20:59.284585+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.107:0/884575754' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: audit 2026-03-09T21:20:59.284585+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 
192.168.123.107:0/884575754' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: audit 2026-03-09T21:20:59.285082+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: audit 2026-03-09T21:20:59.285082+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: cluster 2026-03-09T21:20:59.773932+0000 mgr.y (mgr.24416) 191 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:00.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:00 vm10 bash[23387]: cluster 2026-03-09T21:20:59.773932+0000 mgr.y (mgr.24416) 191 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:01.257 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame_op PASSED [ 56%] 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:01 vm07 bash[20771]: cluster 2026-03-09T21:21:00.235407+0000 mon.a (mon.0) 1067 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:01 vm07 bash[20771]: cluster 2026-03-09T21:21:00.235407+0000 mon.a (mon.0) 1067 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:01 vm07 
bash[20771]: audit 2026-03-09T21:21:00.237909+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:01 vm07 bash[20771]: audit 2026-03-09T21:21:00.237909+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:01 vm07 bash[20771]: cluster 2026-03-09T21:21:00.247342+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:01 vm07 bash[20771]: cluster 2026-03-09T21:21:00.247342+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:01 vm07 bash[28052]: cluster 2026-03-09T21:21:00.235407+0000 mon.a (mon.0) 1067 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:01 vm07 bash[28052]: cluster 2026-03-09T21:21:00.235407+0000 mon.a (mon.0) 1067 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:01 vm07 bash[28052]: audit 2026-03-09T21:21:00.237909+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:01 vm07 bash[28052]: audit 2026-03-09T21:21:00.237909+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:01 vm07 bash[28052]: cluster 2026-03-09T21:21:00.247342+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T21:21:01.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:01 vm07 bash[28052]: cluster 2026-03-09T21:21:00.247342+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T21:21:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:01 vm10 bash[23387]: cluster 2026-03-09T21:21:00.235407+0000 mon.a (mon.0) 1067 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:01 vm10 bash[23387]: cluster 2026-03-09T21:21:00.235407+0000 mon.a (mon.0) 1067 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:01 vm10 bash[23387]: audit 2026-03-09T21:21:00.237909+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:01 vm10 bash[23387]: audit 2026-03-09T21:21:00.237909+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:01 vm10 bash[23387]: cluster 2026-03-09T21:21:00.247342+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T21:21:01.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:01 vm10 bash[23387]: cluster 2026-03-09T21:21:00.247342+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:02 vm07 bash[20771]: cluster 2026-03-09T21:21:01.256073+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:02 vm07 bash[20771]: cluster 2026-03-09T21:21:01.256073+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:02 vm07 bash[20771]: cluster 2026-03-09T21:21:01.774288+0000 mgr.y (mgr.24416) 192 : cluster [DBG] pgmap v289: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:02 vm07 bash[20771]: cluster 2026-03-09T21:21:01.774288+0000 mgr.y (mgr.24416) 192 : cluster [DBG] pgmap v289: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:02 vm07 bash[28052]: cluster 2026-03-09T21:21:01.256073+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:02 vm07 bash[28052]: cluster 2026-03-09T21:21:01.256073+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:02 vm07 bash[28052]: cluster 2026-03-09T21:21:01.774288+0000 mgr.y (mgr.24416) 
192 : cluster [DBG] pgmap v289: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:02.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:02 vm07 bash[28052]: cluster 2026-03-09T21:21:01.774288+0000 mgr.y (mgr.24416) 192 : cluster [DBG] pgmap v289: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:02 vm10 bash[23387]: cluster 2026-03-09T21:21:01.256073+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T21:21:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:02 vm10 bash[23387]: cluster 2026-03-09T21:21:01.256073+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-09T21:21:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:02 vm10 bash[23387]: cluster 2026-03-09T21:21:01.774288+0000 mgr.y (mgr.24416) 192 : cluster [DBG] pgmap v289: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:02 vm10 bash[23387]: cluster 2026-03-09T21:21:01.774288+0000 mgr.y (mgr.24416) 192 : cluster [DBG] pgmap v289: 164 pgs: 164 active+clean; 455 KiB data, 409 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:03.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:03 vm07 bash[20771]: cluster 2026-03-09T21:21:02.323370+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T21:21:03.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:03 vm07 bash[20771]: cluster 2026-03-09T21:21:02.323370+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T21:21:03.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:03 vm07 bash[28052]: cluster 2026-03-09T21:21:02.323370+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T21:21:03.615 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:03 vm07 bash[28052]: cluster 2026-03-09T21:21:02.323370+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T21:21:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:03 vm10 bash[23387]: cluster 2026-03-09T21:21:02.323370+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T21:21:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:03 vm10 bash[23387]: cluster 2026-03-09T21:21:02.323370+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:04 vm07 bash[20771]: cluster 2026-03-09T21:21:03.329459+0000 mon.a (mon.0) 1072 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:04 vm07 bash[20771]: cluster 2026-03-09T21:21:03.329459+0000 mon.a (mon.0) 1072 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:04 vm07 bash[20771]: audit 2026-03-09T21:21:03.370439+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:04 vm07 bash[20771]: audit 2026-03-09T21:21:03.370439+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 
192.168.123.107:0/796210354' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:04 vm07 bash[20771]: cluster 2026-03-09T21:21:03.774566+0000 mgr.y (mgr.24416) 193 : cluster [DBG] pgmap v292: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:04 vm07 bash[20771]: cluster 2026-03-09T21:21:03.774566+0000 mgr.y (mgr.24416) 193 : cluster [DBG] pgmap v292: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:04 vm07 bash[28052]: cluster 2026-03-09T21:21:03.329459+0000 mon.a (mon.0) 1072 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:04 vm07 bash[28052]: cluster 2026-03-09T21:21:03.329459+0000 mon.a (mon.0) 1072 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:04 vm07 bash[28052]: audit 2026-03-09T21:21:03.370439+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:04 vm07 bash[28052]: audit 2026-03-09T21:21:03.370439+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 
192.168.123.107:0/796210354' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:04 vm07 bash[28052]: cluster 2026-03-09T21:21:03.774566+0000 mgr.y (mgr.24416) 193 : cluster [DBG] pgmap v292: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:04 vm07 bash[28052]: cluster 2026-03-09T21:21:03.774566+0000 mgr.y (mgr.24416) 193 : cluster [DBG] pgmap v292: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:04 vm10 bash[23387]: cluster 2026-03-09T21:21:03.329459+0000 mon.a (mon.0) 1072 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T21:21:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:04 vm10 bash[23387]: cluster 2026-03-09T21:21:03.329459+0000 mon.a (mon.0) 1072 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-09T21:21:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:04 vm10 bash[23387]: audit 2026-03-09T21:21:03.370439+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:04 vm10 bash[23387]: audit 2026-03-09T21:21:03.370439+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 
192.168.123.107:0/796210354' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:04 vm10 bash[23387]: cluster 2026-03-09T21:21:03.774566+0000 mgr.y (mgr.24416) 193 : cluster [DBG] pgmap v292: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:04 vm10 bash[23387]: cluster 2026-03-09T21:21:03.774566+0000 mgr.y (mgr.24416) 193 : cluster [DBG] pgmap v292: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:05.355 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_vals_by_keys PASSED [ 57%] 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:05 vm07 bash[20771]: audit 2026-03-09T21:21:04.341039+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:05 vm07 bash[20771]: audit 2026-03-09T21:21:04.341039+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 
192.168.123.107:0/796210354' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:05 vm07 bash[20771]: cluster 2026-03-09T21:21:04.344224+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:05 vm07 bash[20771]: cluster 2026-03-09T21:21:04.344224+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:05 vm07 bash[28052]: audit 2026-03-09T21:21:04.341039+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:05 vm07 bash[28052]: audit 2026-03-09T21:21:04.341039+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:05 vm07 bash[28052]: cluster 2026-03-09T21:21:04.344224+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T21:21:05.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:05 vm07 bash[28052]: cluster 2026-03-09T21:21:04.344224+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T21:21:05.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:05 vm10 bash[23387]: audit 2026-03-09T21:21:04.341039+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 192.168.123.107:0/796210354' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:05.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:05 vm10 bash[23387]: audit 2026-03-09T21:21:04.341039+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? 
192.168.123.107:0/796210354' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:05.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:05 vm10 bash[23387]: cluster 2026-03-09T21:21:04.344224+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T21:21:05.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:05 vm10 bash[23387]: cluster 2026-03-09T21:21:04.344224+0000 mon.a (mon.0) 1075 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-09T21:21:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:06 vm10 bash[23387]: cluster 2026-03-09T21:21:05.350685+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T21:21:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:06 vm10 bash[23387]: cluster 2026-03-09T21:21:05.350685+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T21:21:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:06 vm10 bash[23387]: cluster 2026-03-09T21:21:05.774821+0000 mgr.y (mgr.24416) 194 : cluster [DBG] pgmap v295: 164 pgs: 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:06.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:06 vm10 bash[23387]: cluster 2026-03-09T21:21:05.774821+0000 mgr.y (mgr.24416) 194 : cluster [DBG] pgmap v295: 164 pgs: 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:06.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:21:06 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:06 vm07 bash[20771]: cluster 2026-03-09T21:21:05.350685+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:06 vm07 bash[20771]: cluster 2026-03-09T21:21:05.350685+0000 
mon.a (mon.0) 1076 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:06 vm07 bash[20771]: cluster 2026-03-09T21:21:05.774821+0000 mgr.y (mgr.24416) 194 : cluster [DBG] pgmap v295: 164 pgs: 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:06 vm07 bash[20771]: cluster 2026-03-09T21:21:05.774821+0000 mgr.y (mgr.24416) 194 : cluster [DBG] pgmap v295: 164 pgs: 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:06 vm07 bash[28052]: cluster 2026-03-09T21:21:05.350685+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:06 vm07 bash[28052]: cluster 2026-03-09T21:21:05.350685+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:06 vm07 bash[28052]: cluster 2026-03-09T21:21:05.774821+0000 mgr.y (mgr.24416) 194 : cluster [DBG] pgmap v295: 164 pgs: 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:06.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:06 vm07 bash[28052]: cluster 2026-03-09T21:21:05.774821+0000 mgr.y (mgr.24416) 194 : cluster [DBG] pgmap v295: 164 pgs: 164 active+clean; 455 KiB data, 414 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: cluster 2026-03-09T21:21:06.367691+0000 mon.a (mon.0) 1077 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: cluster 2026-03-09T21:21:06.367691+0000 mon.a (mon.0) 1077 : 
cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: audit 2026-03-09T21:21:06.382314+0000 mgr.y (mgr.24416) 195 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: audit 2026-03-09T21:21:06.382314+0000 mgr.y (mgr.24416) 195 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: cluster 2026-03-09T21:21:06.385612+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: cluster 2026-03-09T21:21:06.385612+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: cluster 2026-03-09T21:21:07.374181+0000 mon.a (mon.0) 1079 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T21:21:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:07 vm10 bash[23387]: cluster 2026-03-09T21:21:07.374181+0000 mon.a (mon.0) 1079 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: cluster 2026-03-09T21:21:06.367691+0000 mon.a (mon.0) 1077 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: cluster 2026-03-09T21:21:06.367691+0000 mon.a (mon.0) 1077 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: audit 2026-03-09T21:21:06.382314+0000 mgr.y (mgr.24416) 195 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: audit 2026-03-09T21:21:06.382314+0000 mgr.y (mgr.24416) 195 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: cluster 2026-03-09T21:21:06.385612+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: cluster 2026-03-09T21:21:06.385612+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: cluster 2026-03-09T21:21:07.374181+0000 mon.a (mon.0) 1079 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:07 vm07 bash[28052]: cluster 2026-03-09T21:21:07.374181+0000 mon.a (mon.0) 1079 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: cluster 2026-03-09T21:21:06.367691+0000 mon.a (mon.0) 1077 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: cluster 2026-03-09T21:21:06.367691+0000 mon.a (mon.0) 1077 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: audit 
2026-03-09T21:21:06.382314+0000 mgr.y (mgr.24416) 195 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: audit 2026-03-09T21:21:06.382314+0000 mgr.y (mgr.24416) 195 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: cluster 2026-03-09T21:21:06.385612+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: cluster 2026-03-09T21:21:06.385612+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: cluster 2026-03-09T21:21:07.374181+0000 mon.a (mon.0) 1079 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T21:21:07.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:07 vm07 bash[20771]: cluster 2026-03-09T21:21:07.374181+0000 mon.a (mon.0) 1079 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-09T21:21:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:08 vm10 bash[23387]: audit 2026-03-09T21:21:07.422994+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.107:0/616774703' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:08 vm10 bash[23387]: audit 2026-03-09T21:21:07.422994+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 
192.168.123.107:0/616774703' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:08 vm10 bash[23387]: audit 2026-03-09T21:21:07.423369+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:08 vm10 bash[23387]: audit 2026-03-09T21:21:07.423369+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:08 vm10 bash[23387]: cluster 2026-03-09T21:21:07.775794+0000 mgr.y (mgr.24416) 196 : cluster [DBG] pgmap v298: 196 pgs: 196 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:08 vm10 bash[23387]: cluster 2026-03-09T21:21:07.775794+0000 mgr.y (mgr.24416) 196 : cluster [DBG] pgmap v298: 196 pgs: 196 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:08 vm07 bash[20771]: audit 2026-03-09T21:21:07.422994+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.107:0/616774703' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:08 vm07 bash[20771]: audit 2026-03-09T21:21:07.422994+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.107:0/616774703' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:08 vm07 bash[20771]: audit 2026-03-09T21:21:07.423369+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:08 vm07 bash[20771]: audit 2026-03-09T21:21:07.423369+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:08 vm07 bash[20771]: cluster 2026-03-09T21:21:07.775794+0000 mgr.y (mgr.24416) 196 : cluster [DBG] pgmap v298: 196 pgs: 196 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:08 vm07 bash[20771]: cluster 2026-03-09T21:21:07.775794+0000 mgr.y (mgr.24416) 196 : cluster [DBG] pgmap v298: 196 pgs: 196 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:21:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:21:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:08 vm07 bash[28052]: audit 2026-03-09T21:21:07.422994+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.107:0/616774703' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:08 vm07 bash[28052]: audit 2026-03-09T21:21:07.422994+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.107:0/616774703' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:08 vm07 bash[28052]: audit 2026-03-09T21:21:07.423369+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:08 vm07 bash[28052]: audit 2026-03-09T21:21:07.423369+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:08 vm07 bash[28052]: cluster 2026-03-09T21:21:07.775794+0000 mgr.y (mgr.24416) 196 : cluster [DBG] pgmap v298: 196 pgs: 196 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:08.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:08 vm07 bash[28052]: cluster 2026-03-09T21:21:07.775794+0000 mgr.y (mgr.24416) 196 : cluster [DBG] pgmap v298: 196 pgs: 196 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:09.442 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_keys PASSED [ 58%] 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:09 vm07 bash[20771]: audit 2026-03-09T21:21:08.418460+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:09 vm07 bash[20771]: audit 2026-03-09T21:21:08.418460+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:09 vm07 bash[20771]: cluster 2026-03-09T21:21:08.422154+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:09 vm07 bash[20771]: cluster 2026-03-09T21:21:08.422154+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:09 vm07 bash[28052]: audit 2026-03-09T21:21:08.418460+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:09 vm07 bash[28052]: audit 2026-03-09T21:21:08.418460+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:09 vm07 bash[28052]: cluster 2026-03-09T21:21:08.422154+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T21:21:09.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:09 vm07 bash[28052]: cluster 2026-03-09T21:21:08.422154+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T21:21:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:09 vm10 bash[23387]: audit 2026-03-09T21:21:08.418460+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:09 vm10 bash[23387]: audit 2026-03-09T21:21:08.418460+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:09 vm10 bash[23387]: cluster 2026-03-09T21:21:08.422154+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T21:21:09.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:09 vm10 bash[23387]: cluster 2026-03-09T21:21:08.422154+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:10 vm07 bash[20771]: cluster 2026-03-09T21:21:09.436708+0000 mon.a (mon.0) 1083 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:10 vm07 bash[20771]: cluster 2026-03-09T21:21:09.436708+0000 mon.a (mon.0) 1083 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:10 vm07 bash[20771]: cluster 2026-03-09T21:21:09.776161+0000 mgr.y (mgr.24416) 197 : cluster [DBG] pgmap v301: 164 pgs: 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:10 vm07 bash[20771]: cluster 2026-03-09T21:21:09.776161+0000 mgr.y (mgr.24416) 197 : cluster [DBG] pgmap v301: 164 pgs: 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:10 vm07 bash[28052]: cluster 2026-03-09T21:21:09.436708+0000 mon.a (mon.0) 1083 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:10 vm07 bash[28052]: cluster 2026-03-09T21:21:09.436708+0000 mon.a (mon.0) 1083 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:10 vm07 bash[28052]: cluster 
2026-03-09T21:21:09.776161+0000 mgr.y (mgr.24416) 197 : cluster [DBG] pgmap v301: 164 pgs: 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:10.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:10 vm07 bash[28052]: cluster 2026-03-09T21:21:09.776161+0000 mgr.y (mgr.24416) 197 : cluster [DBG] pgmap v301: 164 pgs: 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:10.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:10 vm10 bash[23387]: cluster 2026-03-09T21:21:09.436708+0000 mon.a (mon.0) 1083 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T21:21:10.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:10 vm10 bash[23387]: cluster 2026-03-09T21:21:09.436708+0000 mon.a (mon.0) 1083 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-09T21:21:10.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:10 vm10 bash[23387]: cluster 2026-03-09T21:21:09.776161+0000 mgr.y (mgr.24416) 197 : cluster [DBG] pgmap v301: 164 pgs: 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:10.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:10 vm10 bash[23387]: cluster 2026-03-09T21:21:09.776161+0000 mgr.y (mgr.24416) 197 : cluster [DBG] pgmap v301: 164 pgs: 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:11.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:11 vm10 bash[23387]: cluster 2026-03-09T21:21:10.479600+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T21:21:11.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:11 vm10 bash[23387]: cluster 2026-03-09T21:21:10.479600+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T21:21:12.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:11 vm07 bash[20771]: cluster 
2026-03-09T21:21:10.479600+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T21:21:12.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:11 vm07 bash[20771]: cluster 2026-03-09T21:21:10.479600+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T21:21:12.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:11 vm07 bash[28052]: cluster 2026-03-09T21:21:10.479600+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T21:21:12.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:11 vm07 bash[28052]: cluster 2026-03-09T21:21:10.479600+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: cluster 2026-03-09T21:21:11.483573+0000 mon.a (mon.0) 1085 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: cluster 2026-03-09T21:21:11.483573+0000 mon.a (mon.0) 1085 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: audit 2026-03-09T21:21:11.692369+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? 192.168.123.107:0/1724159792' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: audit 2026-03-09T21:21:11.692369+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? 
192.168.123.107:0/1724159792' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: cluster 2026-03-09T21:21:11.776529+0000 mgr.y (mgr.24416) 198 : cluster [DBG] pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: cluster 2026-03-09T21:21:11.776529+0000 mgr.y (mgr.24416) 198 : cluster [DBG] pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: audit 2026-03-09T21:21:12.005576+0000 mon.c (mon.2) 94 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: audit 2026-03-09T21:21:12.005576+0000 mon.c (mon.2) 94 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: audit 2026-03-09T21:21:12.469386+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.107:0/1724159792' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: audit 2026-03-09T21:21:12.469386+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 
192.168.123.107:0/1724159792' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: cluster 2026-03-09T21:21:12.476091+0000 mon.a (mon.0) 1088 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T21:21:12.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:12 vm10 bash[23387]: cluster 2026-03-09T21:21:12.476091+0000 mon.a (mon.0) 1088 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: cluster 2026-03-09T21:21:11.483573+0000 mon.a (mon.0) 1085 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: cluster 2026-03-09T21:21:11.483573+0000 mon.a (mon.0) 1085 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: audit 2026-03-09T21:21:11.692369+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? 192.168.123.107:0/1724159792' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: audit 2026-03-09T21:21:11.692369+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? 
192.168.123.107:0/1724159792' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: cluster 2026-03-09T21:21:11.776529+0000 mgr.y (mgr.24416) 198 : cluster [DBG] pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: cluster 2026-03-09T21:21:11.776529+0000 mgr.y (mgr.24416) 198 : cluster [DBG] pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: audit 2026-03-09T21:21:12.005576+0000 mon.c (mon.2) 94 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: audit 2026-03-09T21:21:12.005576+0000 mon.c (mon.2) 94 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: audit 2026-03-09T21:21:12.469386+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.107:0/1724159792' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: audit 2026-03-09T21:21:12.469386+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 
192.168.123.107:0/1724159792' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: cluster 2026-03-09T21:21:12.476091+0000 mon.a (mon.0) 1088 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:12 vm07 bash[20771]: cluster 2026-03-09T21:21:12.476091+0000 mon.a (mon.0) 1088 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: cluster 2026-03-09T21:21:11.483573+0000 mon.a (mon.0) 1085 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: cluster 2026-03-09T21:21:11.483573+0000 mon.a (mon.0) 1085 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: audit 2026-03-09T21:21:11.692369+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? 192.168.123.107:0/1724159792' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: audit 2026-03-09T21:21:11.692369+0000 mon.a (mon.0) 1086 : audit [INF] from='client.? 
192.168.123.107:0/1724159792' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: cluster 2026-03-09T21:21:11.776529+0000 mgr.y (mgr.24416) 198 : cluster [DBG] pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: cluster 2026-03-09T21:21:11.776529+0000 mgr.y (mgr.24416) 198 : cluster [DBG] pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 418 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: audit 2026-03-09T21:21:12.005576+0000 mon.c (mon.2) 94 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: audit 2026-03-09T21:21:12.005576+0000 mon.c (mon.2) 94 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: audit 2026-03-09T21:21:12.469386+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 192.168.123.107:0/1724159792' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: audit 2026-03-09T21:21:12.469386+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 
192.168.123.107:0/1724159792' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: cluster 2026-03-09T21:21:12.476091+0000 mon.a (mon.0) 1088 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T21:21:13.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:12 vm07 bash[28052]: cluster 2026-03-09T21:21:12.476091+0000 mon.a (mon.0) 1088 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in 2026-03-09T21:21:13.508 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_clear_omap PASSED [ 59%] 2026-03-09T21:21:13.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:13 vm10 bash[23387]: cluster 2026-03-09T21:21:12.666640+0000 mon.a (mon.0) 1089 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:13.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:13 vm10 bash[23387]: cluster 2026-03-09T21:21:12.666640+0000 mon.a (mon.0) 1089 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:13.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:13 vm10 bash[23387]: cluster 2026-03-09T21:21:13.506803+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T21:21:13.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:13 vm10 bash[23387]: cluster 2026-03-09T21:21:13.506803+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in 2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:13 vm07 bash[20771]: cluster 2026-03-09T21:21:12.666640+0000 mon.a (mon.0) 1089 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:13 vm07 bash[20771]: cluster 
2026-03-09T21:21:12.666640+0000 mon.a (mon.0) 1089 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:13 vm07 bash[20771]: cluster 2026-03-09T21:21:13.506803+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:13 vm07 bash[20771]: cluster 2026-03-09T21:21:13.506803+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:13 vm07 bash[28052]: cluster 2026-03-09T21:21:12.666640+0000 mon.a (mon.0) 1089 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:13 vm07 bash[28052]: cluster 2026-03-09T21:21:12.666640+0000 mon.a (mon.0) 1089 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:13 vm07 bash[28052]: cluster 2026-03-09T21:21:13.506803+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-09T21:21:14.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:13 vm07 bash[28052]: cluster 2026-03-09T21:21:13.506803+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:14 vm07 bash[20771]: cluster 2026-03-09T21:21:13.776918+0000 mgr.y (mgr.24416) 199 : cluster [DBG] pgmap v307: 164 pgs: 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:14 vm07 bash[20771]: cluster 2026-03-09T21:21:13.776918+0000 mgr.y (mgr.24416) 199 : cluster [DBG] pgmap v307: 164 pgs: 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:14 vm07 bash[20771]: cluster 2026-03-09T21:21:14.505423+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:14 vm07 bash[20771]: cluster 2026-03-09T21:21:14.505423+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:14 vm07 bash[28052]: cluster 2026-03-09T21:21:13.776918+0000 mgr.y (mgr.24416) 199 : cluster [DBG] pgmap v307: 164 pgs: 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:14 vm07 bash[28052]: cluster 2026-03-09T21:21:13.776918+0000 mgr.y (mgr.24416) 199 : cluster [DBG] pgmap v307: 164 pgs: 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:14 vm07 bash[28052]: cluster 2026-03-09T21:21:14.505423+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-09T21:21:15.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:14 vm07 bash[28052]: cluster 2026-03-09T21:21:14.505423+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-09T21:21:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:14 vm10 bash[23387]: cluster 2026-03-09T21:21:13.776918+0000 mgr.y (mgr.24416) 199 : cluster [DBG] pgmap v307: 164 pgs: 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:14 vm10 bash[23387]: cluster 2026-03-09T21:21:13.776918+0000 mgr.y (mgr.24416) 199 : cluster [DBG] pgmap v307: 164 pgs: 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:14 vm10 bash[23387]: cluster 2026-03-09T21:21:14.505423+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-09T21:21:15.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:14 vm10 bash[23387]: cluster 2026-03-09T21:21:14.505423+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: cluster 2026-03-09T21:21:15.510269+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: cluster 2026-03-09T21:21:15.510269+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: audit 2026-03-09T21:21:15.552807+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.107:0/4284714239' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: audit 2026-03-09T21:21:15.552807+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.107:0/4284714239' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: audit 2026-03-09T21:21:15.553276+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: audit 2026-03-09T21:21:15.553276+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: cluster 2026-03-09T21:21:15.777231+0000 mgr.y (mgr.24416) 200 : cluster [DBG] pgmap v310: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:16 vm10 bash[23387]: cluster 2026-03-09T21:21:15.777231+0000 mgr.y (mgr.24416) 200 : cluster [DBG] pgmap v310: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:16.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:21:16 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:21:16.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: cluster 2026-03-09T21:21:15.510269+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: cluster 2026-03-09T21:21:15.510269+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: audit 2026-03-09T21:21:15.552807+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.107:0/4284714239' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: audit 2026-03-09T21:21:15.552807+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.107:0/4284714239' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: audit 2026-03-09T21:21:15.553276+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: audit 2026-03-09T21:21:15.553276+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: cluster 2026-03-09T21:21:15.777231+0000 mgr.y (mgr.24416) 200 : cluster [DBG] pgmap v310: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:16 vm07 bash[20771]: cluster 2026-03-09T21:21:15.777231+0000 mgr.y (mgr.24416) 200 : cluster [DBG] pgmap v310: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: cluster 2026-03-09T21:21:15.510269+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: cluster 2026-03-09T21:21:15.510269+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: audit 2026-03-09T21:21:15.552807+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.107:0/4284714239' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: audit 2026-03-09T21:21:15.552807+0000 mon.c (mon.2) 95 : audit [INF] from='client.? 192.168.123.107:0/4284714239' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: audit 2026-03-09T21:21:15.553276+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: audit 2026-03-09T21:21:15.553276+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: cluster 2026-03-09T21:21:15.777231+0000 mgr.y (mgr.24416) 200 : cluster [DBG] pgmap v310: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:16.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:16 vm07 bash[28052]: cluster 2026-03-09T21:21:15.777231+0000 mgr.y (mgr.24416) 200 : cluster [DBG] pgmap v310: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 427 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:17.522 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_omap_range2 PASSED [ 60%]
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:17 vm07 bash[20771]: audit 2026-03-09T21:21:16.392964+0000 mgr.y (mgr.24416) 201 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:17 vm07 bash[20771]: audit 2026-03-09T21:21:16.392964+0000 mgr.y (mgr.24416) 201 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:17 vm07 bash[20771]: audit 2026-03-09T21:21:16.510642+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:17 vm07 bash[20771]: audit 2026-03-09T21:21:16.510642+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:17 vm07 bash[20771]: cluster 2026-03-09T21:21:16.514859+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:17 vm07 bash[20771]: cluster 2026-03-09T21:21:16.514859+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:17 vm07 bash[28052]: audit 2026-03-09T21:21:16.392964+0000 mgr.y (mgr.24416) 201 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:17 vm07 bash[28052]: audit 2026-03-09T21:21:16.392964+0000 mgr.y (mgr.24416) 201 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:17 vm07 bash[28052]: audit 2026-03-09T21:21:16.510642+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:17 vm07 bash[28052]: audit 2026-03-09T21:21:16.510642+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:17 vm07 bash[28052]: cluster 2026-03-09T21:21:16.514859+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-09T21:21:17.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:17 vm07 bash[28052]: cluster 2026-03-09T21:21:16.514859+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-09T21:21:17.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:17 vm10 bash[23387]: audit 2026-03-09T21:21:16.392964+0000 mgr.y (mgr.24416) 201 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:21:17.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:17 vm10 bash[23387]: audit 2026-03-09T21:21:16.392964+0000 mgr.y (mgr.24416) 201 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:21:17.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:17 vm10 bash[23387]: audit 2026-03-09T21:21:16.510642+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:17.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:17 vm10 bash[23387]: audit 2026-03-09T21:21:16.510642+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:17.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:17 vm10 bash[23387]: cluster 2026-03-09T21:21:16.514859+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-09T21:21:17.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:17 vm10 bash[23387]: cluster 2026-03-09T21:21:16.514859+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:18 vm07 bash[20771]: cluster 2026-03-09T21:21:17.517030+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:18 vm07 bash[20771]: cluster 2026-03-09T21:21:17.517030+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:18 vm07 bash[20771]: cluster 2026-03-09T21:21:17.777821+0000 mgr.y (mgr.24416) 202 : cluster [DBG] pgmap v313: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:18 vm07 bash[20771]: cluster 2026-03-09T21:21:17.777821+0000 mgr.y (mgr.24416) 202 : cluster [DBG] pgmap v313: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:21:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:21:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:18 vm07 bash[28052]: cluster 2026-03-09T21:21:17.517030+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:18 vm07 bash[28052]: cluster 2026-03-09T21:21:17.517030+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:18 vm07 bash[28052]: cluster 2026-03-09T21:21:17.777821+0000 mgr.y (mgr.24416) 202 : cluster [DBG] pgmap v313: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:18.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:18 vm07 bash[28052]: cluster 2026-03-09T21:21:17.777821+0000 mgr.y (mgr.24416) 202 : cluster [DBG] pgmap v313: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:18.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:18 vm10 bash[23387]: cluster 2026-03-09T21:21:17.517030+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-09T21:21:18.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:18 vm10 bash[23387]: cluster 2026-03-09T21:21:17.517030+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-09T21:21:18.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:18 vm10 bash[23387]: cluster 2026-03-09T21:21:17.777821+0000 mgr.y (mgr.24416) 202 : cluster [DBG] pgmap v313: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:18.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:18 vm10 bash[23387]: cluster 2026-03-09T21:21:17.777821+0000 mgr.y (mgr.24416) 202 : cluster [DBG] pgmap v313: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:19 vm07 bash[20771]: cluster 2026-03-09T21:21:18.532425+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:19 vm07 bash[20771]: cluster 2026-03-09T21:21:18.532425+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:19 vm07 bash[20771]: cluster 2026-03-09T21:21:18.546621+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:19 vm07 bash[20771]: cluster 2026-03-09T21:21:18.546621+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:19 vm07 bash[28052]: cluster 2026-03-09T21:21:18.532425+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:19 vm07 bash[28052]: cluster 2026-03-09T21:21:18.532425+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:19 vm07 bash[28052]: cluster 2026-03-09T21:21:18.546621+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-09T21:21:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:19 vm07 bash[28052]: cluster 2026-03-09T21:21:18.546621+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-09T21:21:19.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:19 vm10 bash[23387]: cluster 2026-03-09T21:21:18.532425+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:19.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:19 vm10 bash[23387]: cluster 2026-03-09T21:21:18.532425+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:19.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:19 vm10 bash[23387]: cluster 2026-03-09T21:21:18.546621+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-09T21:21:19.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:19 vm10 bash[23387]: cluster 2026-03-09T21:21:18.546621+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: cluster 2026-03-09T21:21:19.554470+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: cluster 2026-03-09T21:21:19.554470+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: audit 2026-03-09T21:21:19.598479+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.107:0/1232683751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: audit 2026-03-09T21:21:19.598479+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.107:0/1232683751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: audit 2026-03-09T21:21:19.598723+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: audit 2026-03-09T21:21:19.598723+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: cluster 2026-03-09T21:21:19.778223+0000 mgr.y (mgr.24416) 203 : cluster [DBG] pgmap v316: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:20 vm07 bash[20771]: cluster 2026-03-09T21:21:19.778223+0000 mgr.y (mgr.24416) 203 : cluster [DBG] pgmap v316: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: cluster 2026-03-09T21:21:19.554470+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: cluster 2026-03-09T21:21:19.554470+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: audit 2026-03-09T21:21:19.598479+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.107:0/1232683751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: audit 2026-03-09T21:21:19.598479+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.107:0/1232683751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: audit 2026-03-09T21:21:19.598723+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: audit 2026-03-09T21:21:19.598723+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: cluster 2026-03-09T21:21:19.778223+0000 mgr.y (mgr.24416) 203 : cluster [DBG] pgmap v316: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:20 vm07 bash[28052]: cluster 2026-03-09T21:21:19.778223+0000 mgr.y (mgr.24416) 203 : cluster [DBG] pgmap v316: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: cluster 2026-03-09T21:21:19.554470+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: cluster 2026-03-09T21:21:19.554470+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: audit 2026-03-09T21:21:19.598479+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.107:0/1232683751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: audit 2026-03-09T21:21:19.598479+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.107:0/1232683751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: audit 2026-03-09T21:21:19.598723+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: audit 2026-03-09T21:21:19.598723+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: cluster 2026-03-09T21:21:19.778223+0000 mgr.y (mgr.24416) 203 : cluster [DBG] pgmap v316: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:20.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:20 vm10 bash[23387]: cluster 2026-03-09T21:21:19.778223+0000 mgr.y (mgr.24416) 203 : cluster [DBG] pgmap v316: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:21.768 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_omap_cmp PASSED [ 61%]
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:21 vm07 bash[20771]: audit 2026-03-09T21:21:20.555261+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:21 vm07 bash[20771]: audit 2026-03-09T21:21:20.555261+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:21 vm07 bash[20771]: cluster 2026-03-09T21:21:20.576415+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:21 vm07 bash[20771]: cluster 2026-03-09T21:21:20.576415+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:21 vm07 bash[28052]: audit 2026-03-09T21:21:20.555261+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:21 vm07 bash[28052]: audit 2026-03-09T21:21:20.555261+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:21 vm07 bash[28052]: cluster 2026-03-09T21:21:20.576415+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-09T21:21:22.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:21 vm07 bash[28052]: cluster 2026-03-09T21:21:20.576415+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-09T21:21:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:21 vm10 bash[23387]: audit 2026-03-09T21:21:20.555261+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:21 vm10 bash[23387]: audit 2026-03-09T21:21:20.555261+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:21:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:21 vm10 bash[23387]: cluster 2026-03-09T21:21:20.576415+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-09T21:21:22.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:21 vm10 bash[23387]: cluster 2026-03-09T21:21:20.576415+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:22 vm07 bash[20771]: cluster 2026-03-09T21:21:21.750248+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:22 vm07 bash[20771]: cluster 2026-03-09T21:21:21.750248+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:22 vm07 bash[20771]: cluster 2026-03-09T21:21:21.778493+0000 mgr.y (mgr.24416) 204 : cluster [DBG] pgmap v319: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:22 vm07 bash[20771]: cluster 2026-03-09T21:21:21.778493+0000 mgr.y (mgr.24416) 204 : cluster [DBG] pgmap v319: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:22 vm07 bash[28052]: cluster 2026-03-09T21:21:21.750248+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:22 vm07 bash[28052]: cluster 2026-03-09T21:21:21.750248+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:22 vm07 bash[28052]: cluster 2026-03-09T21:21:21.778493+0000 mgr.y (mgr.24416) 204 : cluster [DBG] pgmap v319: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:21:23.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:22 vm07 bash[28052]: cluster 2026-03-09T21:21:21.778493+0000 mgr.y (mgr.24416) 204 : cluster [DBG] pgmap v319: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:21:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:22 vm10 bash[23387]: cluster 2026-03-09T21:21:21.750248+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-09T21:21:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:22 vm10 bash[23387]: cluster 2026-03-09T21:21:21.750248+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-09T21:21:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:22 vm10 bash[23387]: cluster 2026-03-09T21:21:21.778493+0000 mgr.y (mgr.24416) 204 : cluster [DBG] pgmap v319: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:21:23.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:22 vm10 bash[23387]: cluster 2026-03-09T21:21:21.778493+0000 mgr.y (mgr.24416) 204 : cluster [DBG] pgmap v319: 164 pgs: 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:21:24.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:23 vm07 bash[20771]: cluster 2026-03-09T21:21:22.794995+0000 mon.a (mon.0) 1104 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-09T21:21:24.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:23 vm07 bash[20771]: cluster 2026-03-09T21:21:22.794995+0000 mon.a (mon.0) 1104 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-09T21:21:24.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:23 vm07 bash[28052]: cluster 2026-03-09T21:21:22.794995+0000 mon.a (mon.0) 1104 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-09T21:21:24.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:23 vm07 bash[28052]: cluster 2026-03-09T21:21:22.794995+0000 mon.a (mon.0) 1104 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-09T21:21:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:23 vm10 bash[23387]: cluster 2026-03-09T21:21:22.794995+0000 mon.a (mon.0) 1104 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-09T21:21:24.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:23 vm10 bash[23387]: cluster 2026-03-09T21:21:22.794995+0000 mon.a (mon.0) 1104 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: cluster 2026-03-09T21:21:23.778810+0000 mgr.y (mgr.24416) 205 : cluster [DBG] pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: cluster 2026-03-09T21:21:23.778810+0000 mgr.y (mgr.24416) 205 : cluster [DBG] pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: cluster 2026-03-09T21:21:23.790157+0000 mon.a (mon.0) 1105 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: cluster 2026-03-09T21:21:23.790157+0000 mon.a (mon.0) 1105 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: audit 2026-03-09T21:21:23.826884+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.107:0/918739953' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: audit 2026-03-09T21:21:23.826884+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.107:0/918739953' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: audit 2026-03-09T21:21:23.827402+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: audit 2026-03-09T21:21:23.827402+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: cluster 2026-03-09T21:21:23.829639+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: cluster 2026-03-09T21:21:23.829639+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: audit 2026-03-09T21:21:24.756825+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T21:21:25.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:24 vm10 bash[23387]: audit 2026-03-09T21:21:24.756825+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format":
"json"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: cluster 2026-03-09T21:21:23.778810+0000 mgr.y (mgr.24416) 205 : cluster [DBG] pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: cluster 2026-03-09T21:21:23.778810+0000 mgr.y (mgr.24416) 205 : cluster [DBG] pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: cluster 2026-03-09T21:21:23.790157+0000 mon.a (mon.0) 1105 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: cluster 2026-03-09T21:21:23.790157+0000 mon.a (mon.0) 1105 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: audit 2026-03-09T21:21:23.826884+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.107:0/918739953' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: audit 2026-03-09T21:21:23.826884+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.107:0/918739953' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: audit 2026-03-09T21:21:23.827402+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: audit 2026-03-09T21:21:23.827402+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: cluster 2026-03-09T21:21:23.829639+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: cluster 2026-03-09T21:21:23.829639+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: audit 2026-03-09T21:21:24.756825+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:24 vm07 bash[20771]: audit 2026-03-09T21:21:24.756825+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: cluster 2026-03-09T21:21:23.778810+0000 mgr.y (mgr.24416) 205 : cluster [DBG] pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: cluster 2026-03-09T21:21:23.778810+0000 mgr.y (mgr.24416) 205 : cluster [DBG] pgmap v321: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: cluster 2026-03-09T21:21:23.790157+0000 mon.a (mon.0) 1105 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: cluster 2026-03-09T21:21:23.790157+0000 mon.a (mon.0) 1105 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: audit 2026-03-09T21:21:23.826884+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.107:0/918739953' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: audit 2026-03-09T21:21:23.826884+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.107:0/918739953' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: audit 2026-03-09T21:21:23.827402+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: audit 2026-03-09T21:21:23.827402+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: cluster 2026-03-09T21:21:23.829639+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: cluster 2026-03-09T21:21:23.829639+0000 mon.a (mon.0) 1107 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: audit 2026-03-09T21:21:24.756825+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:21:25.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:24 vm07 bash[28052]: audit 2026-03-09T21:21:24.756825+0000 mon.c (mon.2) 97 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:21:25.921 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext_op PASSED [ 62%] 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:24.905350+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:24.905350+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: cluster 2026-03-09T21:21:24.912538+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: cluster 2026-03-09T21:21:24.912538+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.142789+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.142789+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.150109+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.150109+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.484870+0000 mon.c (mon.2) 98 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.484870+0000 mon.c (mon.2) 98 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 
2026-03-09T21:21:25.485635+0000 mon.c (mon.2) 99 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.485635+0000 mon.c (mon.2) 99 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.491800+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:25 vm07 bash[20771]: audit 2026-03-09T21:21:25.491800+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:24.905350+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:24.905350+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: cluster 2026-03-09T21:21:24.912538+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: cluster 2026-03-09T21:21:24.912538+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.142789+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.142789+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.150109+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.150109+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.484870+0000 mon.c (mon.2) 98 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.484870+0000 mon.c (mon.2) 98 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 
2026-03-09T21:21:25.485635+0000 mon.c (mon.2) 99 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.485635+0000 mon.c (mon.2) 99 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.491800+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:25 vm07 bash[28052]: audit 2026-03-09T21:21:25.491800+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.403 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:24.905350+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:26.403 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:24.905350+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: cluster 2026-03-09T21:21:24.912538+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: cluster 2026-03-09T21:21:24.912538+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.142789+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.142789+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.150109+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.150109+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.484870+0000 mon.c (mon.2) 98 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.484870+0000 mon.c (mon.2) 98 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 
2026-03-09T21:21:25.485635+0000 mon.c (mon.2) 99 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.485635+0000 mon.c (mon.2) 99 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.491800+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.404 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:25 vm10 bash[23387]: audit 2026-03-09T21:21:25.491800+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:21:26.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:21:26 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:21:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:26 vm10 bash[23387]: cluster 2026-03-09T21:21:25.779060+0000 mgr.y (mgr.24416) 206 : cluster [DBG] pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:26 vm10 bash[23387]: cluster 2026-03-09T21:21:25.779060+0000 mgr.y (mgr.24416) 206 : cluster [DBG] pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:26 vm10 bash[23387]: cluster 2026-03-09T21:21:25.918681+0000 mon.a (mon.0) 1113 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T21:21:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:26 vm10 bash[23387]: cluster 2026-03-09T21:21:25.918681+0000 mon.a (mon.0) 1113 : 
cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T21:21:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:26 vm10 bash[23387]: cluster 2026-03-09T21:21:26.928506+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T21:21:27.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:26 vm10 bash[23387]: cluster 2026-03-09T21:21:26.928506+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:26 vm07 bash[20771]: cluster 2026-03-09T21:21:25.779060+0000 mgr.y (mgr.24416) 206 : cluster [DBG] pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:26 vm07 bash[20771]: cluster 2026-03-09T21:21:25.779060+0000 mgr.y (mgr.24416) 206 : cluster [DBG] pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:26 vm07 bash[20771]: cluster 2026-03-09T21:21:25.918681+0000 mon.a (mon.0) 1113 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:26 vm07 bash[20771]: cluster 2026-03-09T21:21:25.918681+0000 mon.a (mon.0) 1113 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:26 vm07 bash[20771]: cluster 2026-03-09T21:21:26.928506+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:26 vm07 bash[20771]: cluster 2026-03-09T21:21:26.928506+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:26 vm07 bash[28052]: cluster 
2026-03-09T21:21:25.779060+0000 mgr.y (mgr.24416) 206 : cluster [DBG] pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:26 vm07 bash[28052]: cluster 2026-03-09T21:21:25.779060+0000 mgr.y (mgr.24416) 206 : cluster [DBG] pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:26 vm07 bash[28052]: cluster 2026-03-09T21:21:25.918681+0000 mon.a (mon.0) 1113 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:26 vm07 bash[28052]: cluster 2026-03-09T21:21:25.918681+0000 mon.a (mon.0) 1113 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:26 vm07 bash[28052]: cluster 2026-03-09T21:21:26.928506+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T21:21:27.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:26 vm07 bash[28052]: cluster 2026-03-09T21:21:26.928506+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:27 vm07 bash[28052]: audit 2026-03-09T21:21:26.403677+0000 mgr.y (mgr.24416) 207 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:27 vm07 bash[28052]: audit 2026-03-09T21:21:26.403677+0000 mgr.y (mgr.24416) 207 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:27 vm07 bash[28052]: audit 
2026-03-09T21:21:27.013836+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:27 vm07 bash[28052]: audit 2026-03-09T21:21:27.013836+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:27 vm07 bash[28052]: cluster 2026-03-09T21:21:27.919475+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:27 vm07 bash[28052]: cluster 2026-03-09T21:21:27.919475+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:27 vm07 bash[20771]: audit 2026-03-09T21:21:26.403677+0000 mgr.y (mgr.24416) 207 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:27 vm07 bash[20771]: audit 2026-03-09T21:21:26.403677+0000 mgr.y (mgr.24416) 207 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:27 vm07 bash[20771]: audit 2026-03-09T21:21:27.013836+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:27 vm07 bash[20771]: audit 2026-03-09T21:21:27.013836+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:27 vm07 bash[20771]: cluster 2026-03-09T21:21:27.919475+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T21:21:28.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:27 vm07 bash[20771]: cluster 2026-03-09T21:21:27.919475+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T21:21:28.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:27 vm10 bash[23387]: audit 2026-03-09T21:21:26.403677+0000 mgr.y (mgr.24416) 207 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:28.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:27 vm10 bash[23387]: audit 2026-03-09T21:21:26.403677+0000 mgr.y (mgr.24416) 207 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:28.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:27 vm10 bash[23387]: audit 2026-03-09T21:21:27.013836+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:28.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:27 vm10 bash[23387]: audit 2026-03-09T21:21:27.013836+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:28.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:27 vm10 bash[23387]: cluster 2026-03-09T21:21:27.919475+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T21:21:28.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:27 vm10 bash[23387]: cluster 2026-03-09T21:21:27.919475+0000 mon.a (mon.0) 1115 : 
cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-09T21:21:28.973 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:21:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:21:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: cluster 2026-03-09T21:21:27.779684+0000 mgr.y (mgr.24416) 208 : cluster [DBG] pgmap v327: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: cluster 2026-03-09T21:21:27.779684+0000 mgr.y (mgr.24416) 208 : cluster [DBG] pgmap v327: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: audit 2026-03-09T21:21:27.986250+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: audit 2026-03-09T21:21:27.986250+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: audit 2026-03-09T21:21:28.917878+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: audit 2026-03-09T21:21:28.917878+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 
192.168.123.107:0/786829660' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: cluster 2026-03-09T21:21:28.921057+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:28 vm07 bash[20771]: cluster 2026-03-09T21:21:28.921057+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: cluster 2026-03-09T21:21:27.779684+0000 mgr.y (mgr.24416) 208 : cluster [DBG] pgmap v327: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: cluster 2026-03-09T21:21:27.779684+0000 mgr.y (mgr.24416) 208 : cluster [DBG] pgmap v327: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: audit 2026-03-09T21:21:27.986250+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: audit 2026-03-09T21:21:27.986250+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: audit 2026-03-09T21:21:28.917878+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 
192.168.123.107:0/786829660' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: audit 2026-03-09T21:21:28.917878+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: cluster 2026-03-09T21:21:28.921057+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T21:21:29.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:28 vm07 bash[28052]: cluster 2026-03-09T21:21:28.921057+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: cluster 2026-03-09T21:21:27.779684+0000 mgr.y (mgr.24416) 208 : cluster [DBG] pgmap v327: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: cluster 2026-03-09T21:21:27.779684+0000 mgr.y (mgr.24416) 208 : cluster [DBG] pgmap v327: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: audit 2026-03-09T21:21:27.986250+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: audit 2026-03-09T21:21:27.986250+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? 
192.168.123.107:0/786829660' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: audit 2026-03-09T21:21:28.917878+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: audit 2026-03-09T21:21:28.917878+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? 192.168.123.107:0/786829660' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: cluster 2026-03-09T21:21:28.921057+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T21:21:29.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:28 vm10 bash[23387]: cluster 2026-03-09T21:21:28.921057+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-09T21:21:29.930 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs_op PASSED [ 63%] 2026-03-09T21:21:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:30 vm10 bash[23387]: cluster 2026-03-09T21:21:29.780080+0000 mgr.y (mgr.24416) 209 : cluster [DBG] pgmap v330: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:30 vm10 bash[23387]: cluster 2026-03-09T21:21:29.780080+0000 mgr.y (mgr.24416) 209 : cluster [DBG] pgmap v330: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:30 vm10 bash[23387]: cluster 
2026-03-09T21:21:29.924433+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T21:21:31.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:30 vm10 bash[23387]: cluster 2026-03-09T21:21:29.924433+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:30 vm07 bash[20771]: cluster 2026-03-09T21:21:29.780080+0000 mgr.y (mgr.24416) 209 : cluster [DBG] pgmap v330: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:30 vm07 bash[20771]: cluster 2026-03-09T21:21:29.780080+0000 mgr.y (mgr.24416) 209 : cluster [DBG] pgmap v330: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:30 vm07 bash[20771]: cluster 2026-03-09T21:21:29.924433+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:30 vm07 bash[20771]: cluster 2026-03-09T21:21:29.924433+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:30 vm07 bash[28052]: cluster 2026-03-09T21:21:29.780080+0000 mgr.y (mgr.24416) 209 : cluster [DBG] pgmap v330: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:30 vm07 bash[28052]: cluster 2026-03-09T21:21:29.780080+0000 mgr.y (mgr.24416) 209 : cluster [DBG] pgmap v330: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:31.365 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:30 vm07 bash[28052]: cluster 2026-03-09T21:21:29.924433+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T21:21:31.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:30 vm07 bash[28052]: cluster 2026-03-09T21:21:29.924433+0000 mon.a (mon.0) 1119 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-09T21:21:32.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:31 vm07 bash[20771]: cluster 2026-03-09T21:21:30.966780+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T21:21:32.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:31 vm07 bash[20771]: cluster 2026-03-09T21:21:30.966780+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T21:21:32.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:31 vm07 bash[28052]: cluster 2026-03-09T21:21:30.966780+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T21:21:32.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:31 vm07 bash[28052]: cluster 2026-03-09T21:21:30.966780+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T21:21:32.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:31 vm10 bash[23387]: cluster 2026-03-09T21:21:30.966780+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T21:21:32.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:31 vm10 bash[23387]: cluster 2026-03-09T21:21:30.966780+0000 mon.a (mon.0) 1120 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: cluster 2026-03-09T21:21:31.780496+0000 mgr.y (mgr.24416) 210 : cluster [DBG] pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: cluster 
2026-03-09T21:21:31.780496+0000 mgr.y (mgr.24416) 210 : cluster [DBG] pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: cluster 2026-03-09T21:21:31.951160+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: cluster 2026-03-09T21:21:31.951160+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: audit 2026-03-09T21:21:32.013620+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.107:0/2291320760' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: audit 2026-03-09T21:21:32.013620+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.107:0/2291320760' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: audit 2026-03-09T21:21:32.013977+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: audit 2026-03-09T21:21:32.013977+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: audit 2026-03-09T21:21:32.949959+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: audit 2026-03-09T21:21:32.949959+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: cluster 2026-03-09T21:21:32.953142+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:32 vm07 bash[20771]: cluster 2026-03-09T21:21:32.953142+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: cluster 2026-03-09T21:21:31.780496+0000 mgr.y (mgr.24416) 210 : cluster [DBG] pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: cluster 2026-03-09T21:21:31.780496+0000 mgr.y (mgr.24416) 210 : cluster [DBG] pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: cluster 2026-03-09T21:21:31.951160+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: cluster 2026-03-09T21:21:31.951160+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: audit 2026-03-09T21:21:32.013620+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 
192.168.123.107:0/2291320760' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: audit 2026-03-09T21:21:32.013620+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.107:0/2291320760' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: audit 2026-03-09T21:21:32.013977+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: audit 2026-03-09T21:21:32.013977+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: audit 2026-03-09T21:21:32.949959+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: audit 2026-03-09T21:21:32.949959+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: cluster 2026-03-09T21:21:32.953142+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T21:21:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:32 vm07 bash[28052]: cluster 2026-03-09T21:21:32.953142+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: cluster 2026-03-09T21:21:31.780496+0000 mgr.y (mgr.24416) 210 : cluster [DBG] pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: cluster 2026-03-09T21:21:31.780496+0000 mgr.y (mgr.24416) 210 : cluster [DBG] pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: cluster 2026-03-09T21:21:31.951160+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: cluster 2026-03-09T21:21:31.951160+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: audit 2026-03-09T21:21:32.013620+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.107:0/2291320760' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: audit 2026-03-09T21:21:32.013620+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 
192.168.123.107:0/2291320760' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: audit 2026-03-09T21:21:32.013977+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: audit 2026-03-09T21:21:32.013977+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: audit 2026-03-09T21:21:32.949959+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: audit 2026-03-09T21:21:32.949959+0000 mon.a (mon.0) 1123 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: cluster 2026-03-09T21:21:32.953142+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T21:21:33.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:32 vm10 bash[23387]: cluster 2026-03-09T21:21:32.953142+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-09T21:21:34.078 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_locator PASSED [ 64%] 2026-03-09T21:21:34.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:34 vm10 bash[23387]: cluster 2026-03-09T21:21:33.780851+0000 mgr.y (mgr.24416) 211 : cluster [DBG] pgmap v336: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T21:21:34.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:34 vm10 bash[23387]: cluster 2026-03-09T21:21:33.780851+0000 mgr.y (mgr.24416) 211 : cluster [DBG] pgmap v336: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T21:21:34.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:34 vm07 bash[20771]: cluster 2026-03-09T21:21:33.780851+0000 mgr.y (mgr.24416) 211 : cluster [DBG] pgmap v336: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T21:21:34.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:34 vm07 bash[20771]: cluster 2026-03-09T21:21:33.780851+0000 mgr.y (mgr.24416) 211 : cluster [DBG] pgmap v336: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T21:21:34.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:34 vm07 bash[28052]: cluster 2026-03-09T21:21:33.780851+0000 mgr.y 
(mgr.24416) 211 : cluster [DBG] pgmap v336: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T21:21:34.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:34 vm07 bash[28052]: cluster 2026-03-09T21:21:33.780851+0000 mgr.y (mgr.24416) 211 : cluster [DBG] pgmap v336: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T21:21:35.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:35 vm10 bash[23387]: cluster 2026-03-09T21:21:34.074285+0000 mon.a (mon.0) 1125 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T21:21:35.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:35 vm10 bash[23387]: cluster 2026-03-09T21:21:34.074285+0000 mon.a (mon.0) 1125 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T21:21:35.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:35 vm07 bash[20771]: cluster 2026-03-09T21:21:34.074285+0000 mon.a (mon.0) 1125 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T21:21:35.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:35 vm07 bash[20771]: cluster 2026-03-09T21:21:34.074285+0000 mon.a (mon.0) 1125 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T21:21:35.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:35 vm07 bash[28052]: cluster 2026-03-09T21:21:34.074285+0000 mon.a (mon.0) 1125 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T21:21:35.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:35 vm07 bash[28052]: cluster 2026-03-09T21:21:34.074285+0000 mon.a (mon.0) 1125 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-09T21:21:36.414 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:36 vm10 bash[23387]: cluster 2026-03-09T21:21:35.164189+0000 mon.a (mon.0) 1126 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T21:21:36.414 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:36 vm10 bash[23387]: cluster 
2026-03-09T21:21:35.164189+0000 mon.a (mon.0) 1126 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T21:21:36.414 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:36 vm10 bash[23387]: cluster 2026-03-09T21:21:35.781127+0000 mgr.y (mgr.24416) 212 : cluster [DBG] pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:36.414 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:36 vm10 bash[23387]: cluster 2026-03-09T21:21:35.781127+0000 mgr.y (mgr.24416) 212 : cluster [DBG] pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:36 vm07 bash[28052]: cluster 2026-03-09T21:21:35.164189+0000 mon.a (mon.0) 1126 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:36 vm07 bash[28052]: cluster 2026-03-09T21:21:35.164189+0000 mon.a (mon.0) 1126 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:36 vm07 bash[28052]: cluster 2026-03-09T21:21:35.781127+0000 mgr.y (mgr.24416) 212 : cluster [DBG] pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:36 vm07 bash[28052]: cluster 2026-03-09T21:21:35.781127+0000 mgr.y (mgr.24416) 212 : cluster [DBG] pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:36 vm07 bash[20771]: cluster 2026-03-09T21:21:35.164189+0000 mon.a (mon.0) 1126 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:36 vm07 
bash[20771]: cluster 2026-03-09T21:21:35.164189+0000 mon.a (mon.0) 1126 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:36 vm07 bash[20771]: cluster 2026-03-09T21:21:35.781127+0000 mgr.y (mgr.24416) 212 : cluster [DBG] pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:36.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:36 vm07 bash[20771]: cluster 2026-03-09T21:21:35.781127+0000 mgr.y (mgr.24416) 212 : cluster [DBG] pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:36.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:21:36 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: cluster 2026-03-09T21:21:36.186443+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: cluster 2026-03-09T21:21:36.186443+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: audit 2026-03-09T21:21:36.208550+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.107:0/2492869843' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: audit 2026-03-09T21:21:36.208550+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 
192.168.123.107:0/2492869843' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: audit 2026-03-09T21:21:36.209960+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: audit 2026-03-09T21:21:36.209960+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: audit 2026-03-09T21:21:36.414425+0000 mgr.y (mgr.24416) 213 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:37 vm07 bash[20771]: audit 2026-03-09T21:21:36.414425+0000 mgr.y (mgr.24416) 213 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: cluster 2026-03-09T21:21:36.186443+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: cluster 2026-03-09T21:21:36.186443+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: audit 2026-03-09T21:21:36.208550+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 
192.168.123.107:0/2492869843' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: audit 2026-03-09T21:21:36.208550+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.107:0/2492869843' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: audit 2026-03-09T21:21:36.209960+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: audit 2026-03-09T21:21:36.209960+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: audit 2026-03-09T21:21:36.414425+0000 mgr.y (mgr.24416) 213 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:37.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:37 vm07 bash[28052]: audit 2026-03-09T21:21:36.414425+0000 mgr.y (mgr.24416) 213 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: cluster 2026-03-09T21:21:36.186443+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: cluster 2026-03-09T21:21:36.186443+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: 
audit 2026-03-09T21:21:36.208550+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.107:0/2492869843' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: audit 2026-03-09T21:21:36.208550+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.107:0/2492869843' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: audit 2026-03-09T21:21:36.209960+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: audit 2026-03-09T21:21:36.209960+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: audit 2026-03-09T21:21:36.414425+0000 mgr.y (mgr.24416) 213 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:37.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:37 vm10 bash[23387]: audit 2026-03-09T21:21:36.414425+0000 mgr.y (mgr.24416) 213 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:38.312 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_operate_aio_write_op PASSED [ 65%] 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: audit 2026-03-09T21:21:37.296156+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: audit 2026-03-09T21:21:37.296156+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: cluster 2026-03-09T21:21:37.300287+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: cluster 2026-03-09T21:21:37.300287+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: cluster 2026-03-09T21:21:37.781570+0000 mgr.y (mgr.24416) 214 : cluster [DBG] pgmap v342: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: cluster 2026-03-09T21:21:37.781570+0000 mgr.y (mgr.24416) 214 : cluster [DBG] pgmap v342: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: cluster 2026-03-09T21:21:38.302861+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:38 vm07 bash[20771]: cluster 2026-03-09T21:21:38.302861+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: audit 2026-03-09T21:21:37.296156+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: audit 2026-03-09T21:21:37.296156+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: cluster 2026-03-09T21:21:37.300287+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: cluster 2026-03-09T21:21:37.300287+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: cluster 2026-03-09T21:21:37.781570+0000 mgr.y (mgr.24416) 214 : cluster [DBG] pgmap v342: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: cluster 2026-03-09T21:21:37.781570+0000 mgr.y (mgr.24416) 214 : cluster [DBG] pgmap v342: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: cluster 2026-03-09T21:21:38.302861+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T21:21:38.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:38 vm07 bash[28052]: cluster 2026-03-09T21:21:38.302861+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: audit 2026-03-09T21:21:37.296156+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: audit 2026-03-09T21:21:37.296156+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: cluster 2026-03-09T21:21:37.300287+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: cluster 2026-03-09T21:21:37.300287+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: cluster 2026-03-09T21:21:37.781570+0000 mgr.y (mgr.24416) 214 : cluster [DBG] pgmap v342: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: cluster 2026-03-09T21:21:37.781570+0000 mgr.y (mgr.24416) 214 : cluster [DBG] pgmap v342: 196 pgs: 196 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: cluster 2026-03-09T21:21:38.302861+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T21:21:38.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:38 vm10 bash[23387]: cluster 2026-03-09T21:21:38.302861+0000 mon.a (mon.0) 1131 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-09T21:21:39.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:21:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:21:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:21:40.865 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:40 vm07 bash[20771]: cluster 2026-03-09T21:21:39.428232+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:40 vm07 bash[20771]: cluster 2026-03-09T21:21:39.428232+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:40 vm07 bash[20771]: cluster 2026-03-09T21:21:39.781930+0000 mgr.y (mgr.24416) 215 : cluster [DBG] pgmap v345: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:40 vm07 bash[20771]: cluster 2026-03-09T21:21:39.781930+0000 mgr.y (mgr.24416) 215 : cluster [DBG] pgmap v345: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:40 vm07 bash[28052]: cluster 2026-03-09T21:21:39.428232+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:40 vm07 bash[28052]: cluster 2026-03-09T21:21:39.428232+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:40 vm07 bash[28052]: cluster 2026-03-09T21:21:39.781930+0000 mgr.y (mgr.24416) 215 : cluster [DBG] pgmap v345: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:40.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:40 vm07 bash[28052]: cluster 2026-03-09T21:21:39.781930+0000 mgr.y (mgr.24416) 215 : cluster [DBG] pgmap v345: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T21:21:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:40 vm10 bash[23387]: cluster 2026-03-09T21:21:39.428232+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T21:21:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:40 vm10 bash[23387]: cluster 2026-03-09T21:21:39.428232+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-09T21:21:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:40 vm10 bash[23387]: cluster 2026-03-09T21:21:39.781930+0000 mgr.y (mgr.24416) 215 : cluster [DBG] pgmap v345: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:40.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:40 vm10 bash[23387]: cluster 2026-03-09T21:21:39.781930+0000 mgr.y (mgr.24416) 215 : cluster [DBG] pgmap v345: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:41 vm07 bash[20771]: cluster 2026-03-09T21:21:40.430461+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:41 vm07 bash[20771]: cluster 2026-03-09T21:21:40.430461+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:41 vm07 bash[20771]: audit 2026-03-09T21:21:40.490974+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.107:0/2429575429' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:41 vm07 bash[20771]: audit 2026-03-09T21:21:40.490974+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 
192.168.123.107:0/2429575429' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:41 vm07 bash[20771]: audit 2026-03-09T21:21:40.491606+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:41 vm07 bash[20771]: audit 2026-03-09T21:21:40.491606+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:41 vm07 bash[28052]: cluster 2026-03-09T21:21:40.430461+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:41 vm07 bash[28052]: cluster 2026-03-09T21:21:40.430461+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:41 vm07 bash[28052]: audit 2026-03-09T21:21:40.490974+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.107:0/2429575429' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:41 vm07 bash[28052]: audit 2026-03-09T21:21:40.490974+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.107:0/2429575429' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:41 vm07 bash[28052]: audit 2026-03-09T21:21:40.491606+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:41 vm07 bash[28052]: audit 2026-03-09T21:21:40.491606+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:41 vm10 bash[23387]: cluster 2026-03-09T21:21:40.430461+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T21:21:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:41 vm10 bash[23387]: cluster 2026-03-09T21:21:40.430461+0000 mon.a (mon.0) 1133 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-09T21:21:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:41 vm10 bash[23387]: audit 2026-03-09T21:21:40.490974+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.107:0/2429575429' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:41 vm10 bash[23387]: audit 2026-03-09T21:21:40.490974+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.107:0/2429575429' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:41 vm10 bash[23387]: audit 2026-03-09T21:21:40.491606+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:41.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:41 vm10 bash[23387]: audit 2026-03-09T21:21:40.491606+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:42.479 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write PASSED [ 67%] 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: audit 2026-03-09T21:21:41.467875+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: audit 2026-03-09T21:21:41.467875+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: cluster 2026-03-09T21:21:41.471492+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: cluster 2026-03-09T21:21:41.471492+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: cluster 2026-03-09T21:21:41.782260+0000 mgr.y (mgr.24416) 216 : cluster [DBG] pgmap v348: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: cluster 2026-03-09T21:21:41.782260+0000 mgr.y (mgr.24416) 216 : cluster [DBG] pgmap v348: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: audit 2026-03-09T21:21:42.020388+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:42 vm07 bash[20771]: audit 2026-03-09T21:21:42.020388+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: audit 2026-03-09T21:21:41.467875+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: audit 2026-03-09T21:21:41.467875+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: cluster 2026-03-09T21:21:41.471492+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: cluster 2026-03-09T21:21:41.471492+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: cluster 2026-03-09T21:21:41.782260+0000 mgr.y (mgr.24416) 216 : cluster [DBG] pgmap v348: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: cluster 2026-03-09T21:21:41.782260+0000 mgr.y (mgr.24416) 216 : cluster [DBG] pgmap v348: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: audit 2026-03-09T21:21:42.020388+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24416 
192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:42.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:42 vm07 bash[28052]: audit 2026-03-09T21:21:42.020388+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: audit 2026-03-09T21:21:41.467875+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: audit 2026-03-09T21:21:41.467875+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: cluster 2026-03-09T21:21:41.471492+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: cluster 2026-03-09T21:21:41.471492+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: cluster 2026-03-09T21:21:41.782260+0000 mgr.y (mgr.24416) 216 : cluster [DBG] pgmap v348: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: cluster 2026-03-09T21:21:41.782260+0000 mgr.y (mgr.24416) 216 : cluster [DBG] pgmap v348: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: audit 
2026-03-09T21:21:42.020388+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:42.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:42 vm10 bash[23387]: audit 2026-03-09T21:21:42.020388+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:43.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:43 vm07 bash[20771]: cluster 2026-03-09T21:21:42.473471+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T21:21:43.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:43 vm07 bash[20771]: cluster 2026-03-09T21:21:42.473471+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T21:21:43.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:43 vm07 bash[28052]: cluster 2026-03-09T21:21:42.473471+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T21:21:43.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:43 vm07 bash[28052]: cluster 2026-03-09T21:21:42.473471+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T21:21:43.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:43 vm10 bash[23387]: cluster 2026-03-09T21:21:42.473471+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T21:21:43.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:43 vm10 bash[23387]: cluster 2026-03-09T21:21:42.473471+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:44 vm07 bash[20771]: cluster 2026-03-09T21:21:43.500366+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:44 
vm07 bash[20771]: cluster 2026-03-09T21:21:43.500366+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:44 vm07 bash[20771]: cluster 2026-03-09T21:21:43.782531+0000 mgr.y (mgr.24416) 217 : cluster [DBG] pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:44 vm07 bash[20771]: cluster 2026-03-09T21:21:43.782531+0000 mgr.y (mgr.24416) 217 : cluster [DBG] pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:44 vm07 bash[28052]: cluster 2026-03-09T21:21:43.500366+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:44 vm07 bash[28052]: cluster 2026-03-09T21:21:43.500366+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:44 vm07 bash[28052]: cluster 2026-03-09T21:21:43.782531+0000 mgr.y (mgr.24416) 217 : cluster [DBG] pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:44 vm07 bash[28052]: cluster 2026-03-09T21:21:43.782531+0000 mgr.y (mgr.24416) 217 : cluster [DBG] pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:44 vm10 bash[23387]: cluster 2026-03-09T21:21:43.500366+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T21:21:44.942 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:44 vm10 bash[23387]: cluster 2026-03-09T21:21:43.500366+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-09T21:21:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:44 vm10 bash[23387]: cluster 2026-03-09T21:21:43.782531+0000 mgr.y (mgr.24416) 217 : cluster [DBG] pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:44 vm10 bash[23387]: cluster 2026-03-09T21:21:43.782531+0000 mgr.y (mgr.24416) 217 : cluster [DBG] pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:45 vm07 bash[20771]: cluster 2026-03-09T21:21:44.519712+0000 mon.a (mon.0) 1139 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:45 vm07 bash[20771]: cluster 2026-03-09T21:21:44.519712+0000 mon.a (mon.0) 1139 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:45 vm07 bash[20771]: audit 2026-03-09T21:21:44.563250+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.107:0/3805713076' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:45 vm07 bash[20771]: audit 2026-03-09T21:21:44.563250+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.107:0/3805713076' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:45 vm07 bash[20771]: audit 2026-03-09T21:21:44.564539+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:45 vm07 bash[20771]: audit 2026-03-09T21:21:44.564539+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:45 vm07 bash[28052]: cluster 2026-03-09T21:21:44.519712+0000 mon.a (mon.0) 1139 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:45 vm07 bash[28052]: cluster 2026-03-09T21:21:44.519712+0000 mon.a (mon.0) 1139 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:45 vm07 bash[28052]: audit 2026-03-09T21:21:44.563250+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.107:0/3805713076' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:45 vm07 bash[28052]: audit 2026-03-09T21:21:44.563250+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.107:0/3805713076' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:45 vm07 bash[28052]: audit 2026-03-09T21:21:44.564539+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:45 vm07 bash[28052]: audit 2026-03-09T21:21:44.564539+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:45 vm10 bash[23387]: cluster 2026-03-09T21:21:44.519712+0000 mon.a (mon.0) 1139 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T21:21:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:45 vm10 bash[23387]: cluster 2026-03-09T21:21:44.519712+0000 mon.a (mon.0) 1139 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-09T21:21:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:45 vm10 bash[23387]: audit 2026-03-09T21:21:44.563250+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.107:0/3805713076' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:45 vm10 bash[23387]: audit 2026-03-09T21:21:44.563250+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.107:0/3805713076' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:45 vm10 bash[23387]: audit 2026-03-09T21:21:44.564539+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:45.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:45 vm10 bash[23387]: audit 2026-03-09T21:21:44.564539+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:46.543 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_cmpext PASSED [ 68%] 2026-03-09T21:21:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:46 vm10 bash[23387]: audit 2026-03-09T21:21:45.526463+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:46 vm10 bash[23387]: audit 2026-03-09T21:21:45.526463+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:46 vm10 bash[23387]: cluster 2026-03-09T21:21:45.534503+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T21:21:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:46 vm10 bash[23387]: cluster 2026-03-09T21:21:45.534503+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T21:21:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:46 vm10 bash[23387]: cluster 2026-03-09T21:21:45.782802+0000 mgr.y (mgr.24416) 218 : cluster [DBG] pgmap v354: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:46 vm10 bash[23387]: cluster 2026-03-09T21:21:45.782802+0000 mgr.y (mgr.24416) 218 : cluster [DBG] pgmap v354: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:46.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:21:46 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:46 vm07 bash[20771]: audit 2026-03-09T21:21:45.526463+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:46 vm07 bash[20771]: audit 2026-03-09T21:21:45.526463+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:46 vm07 bash[20771]: cluster 2026-03-09T21:21:45.534503+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:46 vm07 bash[20771]: cluster 2026-03-09T21:21:45.534503+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:46 vm07 bash[20771]: cluster 2026-03-09T21:21:45.782802+0000 mgr.y (mgr.24416) 218 : cluster [DBG] pgmap v354: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:46 vm07 bash[20771]: cluster 2026-03-09T21:21:45.782802+0000 mgr.y (mgr.24416) 218 : cluster [DBG] pgmap v354: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:46 vm07 bash[28052]: audit 2026-03-09T21:21:45.526463+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:46 vm07 bash[28052]: audit 2026-03-09T21:21:45.526463+0000 mon.a (mon.0) 1141 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:46 vm07 bash[28052]: cluster 2026-03-09T21:21:45.534503+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:46 vm07 bash[28052]: cluster 2026-03-09T21:21:45.534503+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:46 vm07 bash[28052]: cluster 2026-03-09T21:21:45.782802+0000 mgr.y (mgr.24416) 218 : cluster [DBG] pgmap v354: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:46.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:46 vm07 bash[28052]: cluster 2026-03-09T21:21:45.782802+0000 mgr.y (mgr.24416) 218 : cluster [DBG] pgmap v354: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:47 vm07 bash[20771]: audit 2026-03-09T21:21:46.422140+0000 mgr.y (mgr.24416) 219 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:47 vm07 bash[20771]: audit 2026-03-09T21:21:46.422140+0000 mgr.y (mgr.24416) 219 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:47 vm07 bash[20771]: cluster 2026-03-09T21:21:46.542063+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:47 vm07 bash[20771]: cluster 
2026-03-09T21:21:46.542063+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:47 vm07 bash[28052]: audit 2026-03-09T21:21:46.422140+0000 mgr.y (mgr.24416) 219 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:47 vm07 bash[28052]: audit 2026-03-09T21:21:46.422140+0000 mgr.y (mgr.24416) 219 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:47 vm07 bash[28052]: cluster 2026-03-09T21:21:46.542063+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T21:21:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:47 vm07 bash[28052]: cluster 2026-03-09T21:21:46.542063+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T21:21:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:47 vm10 bash[23387]: audit 2026-03-09T21:21:46.422140+0000 mgr.y (mgr.24416) 219 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:47 vm10 bash[23387]: audit 2026-03-09T21:21:46.422140+0000 mgr.y (mgr.24416) 219 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:47 vm10 bash[23387]: cluster 2026-03-09T21:21:46.542063+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T21:21:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:47 vm10 bash[23387]: cluster 2026-03-09T21:21:46.542063+0000 
mon.a (mon.0) 1143 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:21:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:21:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:48 vm07 bash[20771]: cluster 2026-03-09T21:21:47.559351+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:48 vm07 bash[20771]: cluster 2026-03-09T21:21:47.559351+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:48 vm07 bash[20771]: cluster 2026-03-09T21:21:47.783265+0000 mgr.y (mgr.24416) 220 : cluster [DBG] pgmap v357: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:48 vm07 bash[20771]: cluster 2026-03-09T21:21:47.783265+0000 mgr.y (mgr.24416) 220 : cluster [DBG] pgmap v357: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:48 vm07 bash[20771]: cluster 2026-03-09T21:21:48.562156+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:48 vm07 bash[20771]: cluster 2026-03-09T21:21:48.562156+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:48 vm07 bash[28052]: cluster 2026-03-09T21:21:47.559351+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:48 vm07 
bash[28052]: cluster 2026-03-09T21:21:47.559351+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:48 vm07 bash[28052]: cluster 2026-03-09T21:21:47.783265+0000 mgr.y (mgr.24416) 220 : cluster [DBG] pgmap v357: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:48 vm07 bash[28052]: cluster 2026-03-09T21:21:47.783265+0000 mgr.y (mgr.24416) 220 : cluster [DBG] pgmap v357: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:48 vm07 bash[28052]: cluster 2026-03-09T21:21:48.562156+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T21:21:48.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:48 vm07 bash[28052]: cluster 2026-03-09T21:21:48.562156+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T21:21:48.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:48 vm10 bash[23387]: cluster 2026-03-09T21:21:47.559351+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T21:21:48.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:48 vm10 bash[23387]: cluster 2026-03-09T21:21:47.559351+0000 mon.a (mon.0) 1144 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-09T21:21:48.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:48 vm10 bash[23387]: cluster 2026-03-09T21:21:47.783265+0000 mgr.y (mgr.24416) 220 : cluster [DBG] pgmap v357: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:48.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:48 vm10 bash[23387]: cluster 
2026-03-09T21:21:47.783265+0000 mgr.y (mgr.24416) 220 : cluster [DBG] pgmap v357: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:48.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:48 vm10 bash[23387]: cluster 2026-03-09T21:21:48.562156+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T21:21:48.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:48 vm10 bash[23387]: cluster 2026-03-09T21:21:48.562156+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:49 vm07 bash[20771]: audit 2026-03-09T21:21:48.616160+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:49 vm07 bash[20771]: audit 2026-03-09T21:21:48.616160+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:49 vm07 bash[20771]: audit 2026-03-09T21:21:49.563249+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:49 vm07 bash[20771]: audit 2026-03-09T21:21:49.563249+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 
192.168.123.107:0/2588531159' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:49 vm07 bash[20771]: cluster 2026-03-09T21:21:49.567727+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:49 vm07 bash[20771]: cluster 2026-03-09T21:21:49.567727+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:49 vm07 bash[28052]: audit 2026-03-09T21:21:48.616160+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:49 vm07 bash[28052]: audit 2026-03-09T21:21:48.616160+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:49 vm07 bash[28052]: audit 2026-03-09T21:21:49.563249+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:49 vm07 bash[28052]: audit 2026-03-09T21:21:49.563249+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 
192.168.123.107:0/2588531159' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:49 vm07 bash[28052]: cluster 2026-03-09T21:21:49.567727+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T21:21:49.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:49 vm07 bash[28052]: cluster 2026-03-09T21:21:49.567727+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T21:21:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:49 vm10 bash[23387]: audit 2026-03-09T21:21:48.616160+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:49 vm10 bash[23387]: audit 2026-03-09T21:21:48.616160+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:49 vm10 bash[23387]: audit 2026-03-09T21:21:49.563249+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.107:0/2588531159' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:49 vm10 bash[23387]: audit 2026-03-09T21:21:49.563249+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 
192.168.123.107:0/2588531159' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:49 vm10 bash[23387]: cluster 2026-03-09T21:21:49.567727+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T21:21:49.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:49 vm10 bash[23387]: cluster 2026-03-09T21:21:49.567727+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-09T21:21:50.573 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_rmxattr PASSED [ 69%] 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:50 vm07 bash[20771]: cluster 2026-03-09T21:21:49.783639+0000 mgr.y (mgr.24416) 221 : cluster [DBG] pgmap v360: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:50 vm07 bash[20771]: cluster 2026-03-09T21:21:49.783639+0000 mgr.y (mgr.24416) 221 : cluster [DBG] pgmap v360: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:50 vm07 bash[20771]: cluster 2026-03-09T21:21:50.569256+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:50 vm07 bash[20771]: cluster 2026-03-09T21:21:50.569256+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:50 vm07 bash[28052]: cluster 2026-03-09T21:21:49.783639+0000 mgr.y (mgr.24416) 221 : cluster [DBG] pgmap v360: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:50 vm07 bash[28052]: cluster 2026-03-09T21:21:49.783639+0000 mgr.y (mgr.24416) 221 : cluster [DBG] pgmap v360: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:50 vm07 bash[28052]: cluster 2026-03-09T21:21:50.569256+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T21:21:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:50 vm07 bash[28052]: cluster 2026-03-09T21:21:50.569256+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T21:21:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:50 vm10 bash[23387]: cluster 2026-03-09T21:21:49.783639+0000 mgr.y (mgr.24416) 221 : cluster [DBG] pgmap v360: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:50 vm10 bash[23387]: cluster 2026-03-09T21:21:49.783639+0000 mgr.y (mgr.24416) 221 : cluster [DBG] pgmap v360: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:50 vm10 bash[23387]: cluster 2026-03-09T21:21:50.569256+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T21:21:50.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:50 vm10 bash[23387]: cluster 2026-03-09T21:21:50.569256+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:52 vm07 bash[20771]: cluster 2026-03-09T21:21:51.577259+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T21:21:52.865 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:52 vm07 bash[20771]: cluster 2026-03-09T21:21:51.577259+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:52 vm07 bash[20771]: cluster 2026-03-09T21:21:51.783961+0000 mgr.y (mgr.24416) 222 : cluster [DBG] pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:52 vm07 bash[20771]: cluster 2026-03-09T21:21:51.783961+0000 mgr.y (mgr.24416) 222 : cluster [DBG] pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:52 vm07 bash[28052]: cluster 2026-03-09T21:21:51.577259+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:52 vm07 bash[28052]: cluster 2026-03-09T21:21:51.577259+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:52 vm07 bash[28052]: cluster 2026-03-09T21:21:51.783961+0000 mgr.y (mgr.24416) 222 : cluster [DBG] pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:52.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:52 vm07 bash[28052]: cluster 2026-03-09T21:21:51.783961+0000 mgr.y (mgr.24416) 222 : cluster [DBG] pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:52.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:52 vm10 bash[23387]: cluster 2026-03-09T21:21:51.577259+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T21:21:52.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:52 vm10 
bash[23387]: cluster 2026-03-09T21:21:51.577259+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-09T21:21:52.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:52 vm10 bash[23387]: cluster 2026-03-09T21:21:51.783961+0000 mgr.y (mgr.24416) 222 : cluster [DBG] pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:52.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:52 vm10 bash[23387]: cluster 2026-03-09T21:21:51.783961+0000 mgr.y (mgr.24416) 222 : cluster [DBG] pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:53 vm07 bash[20771]: cluster 2026-03-09T21:21:52.578370+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:53 vm07 bash[20771]: cluster 2026-03-09T21:21:52.578370+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:53 vm07 bash[20771]: audit 2026-03-09T21:21:52.632579+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.107:0/1915221751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:53 vm07 bash[20771]: audit 2026-03-09T21:21:52.632579+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.107:0/1915221751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:53 vm07 bash[20771]: audit 2026-03-09T21:21:52.633209+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:53 vm07 bash[20771]: audit 2026-03-09T21:21:52.633209+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:53 vm07 bash[28052]: cluster 2026-03-09T21:21:52.578370+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:53 vm07 bash[28052]: cluster 2026-03-09T21:21:52.578370+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:53 vm07 bash[28052]: audit 2026-03-09T21:21:52.632579+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.107:0/1915221751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:53 vm07 bash[28052]: audit 2026-03-09T21:21:52.632579+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.107:0/1915221751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:53 vm07 bash[28052]: audit 2026-03-09T21:21:52.633209+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:53 vm07 bash[28052]: audit 2026-03-09T21:21:52.633209+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:53 vm10 bash[23387]: cluster 2026-03-09T21:21:52.578370+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T21:21:53.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:53 vm10 bash[23387]: cluster 2026-03-09T21:21:52.578370+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-09T21:21:53.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:53 vm10 bash[23387]: audit 2026-03-09T21:21:52.632579+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.107:0/1915221751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:53 vm10 bash[23387]: audit 2026-03-09T21:21:52.632579+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.107:0/1915221751' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:53 vm10 bash[23387]: audit 2026-03-09T21:21:52.633209+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:53.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:53 vm10 bash[23387]: audit 2026-03-09T21:21:52.633209+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:54.619 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_no_comp_ref PASSED [ 70%] 2026-03-09T21:21:54.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:54 vm10 bash[23387]: audit 2026-03-09T21:21:53.595002+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:54.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:54 vm10 bash[23387]: audit 2026-03-09T21:21:53.595002+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:54.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:54 vm10 bash[23387]: cluster 2026-03-09T21:21:53.598360+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T21:21:54.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:54 vm10 bash[23387]: cluster 2026-03-09T21:21:53.598360+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T21:21:54.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:54 vm10 bash[23387]: cluster 2026-03-09T21:21:53.784304+0000 mgr.y (mgr.24416) 223 : cluster [DBG] pgmap v366: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:54.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:54 vm10 bash[23387]: cluster 2026-03-09T21:21:53.784304+0000 mgr.y (mgr.24416) 223 : cluster [DBG] pgmap v366: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:54 vm07 bash[20771]: audit 2026-03-09T21:21:53.595002+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:54 vm07 bash[20771]: audit 2026-03-09T21:21:53.595002+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:54 vm07 bash[20771]: cluster 2026-03-09T21:21:53.598360+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:54 vm07 bash[20771]: cluster 2026-03-09T21:21:53.598360+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:54 vm07 bash[20771]: cluster 2026-03-09T21:21:53.784304+0000 mgr.y (mgr.24416) 223 : cluster [DBG] pgmap v366: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:54 vm07 bash[20771]: cluster 2026-03-09T21:21:53.784304+0000 mgr.y (mgr.24416) 223 : cluster [DBG] pgmap v366: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:54 vm07 bash[28052]: audit 2026-03-09T21:21:53.595002+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:54 vm07 bash[28052]: audit 2026-03-09T21:21:53.595002+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:54 vm07 bash[28052]: cluster 2026-03-09T21:21:53.598360+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:54 vm07 bash[28052]: cluster 2026-03-09T21:21:53.598360+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:54 vm07 bash[28052]: cluster 2026-03-09T21:21:53.784304+0000 mgr.y (mgr.24416) 223 : cluster [DBG] pgmap v366: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:54 vm07 bash[28052]: cluster 2026-03-09T21:21:53.784304+0000 mgr.y (mgr.24416) 223 : cluster [DBG] pgmap v366: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:55.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:55 vm10 bash[23387]: cluster 2026-03-09T21:21:54.610005+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T21:21:55.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:55 vm10 bash[23387]: cluster 2026-03-09T21:21:54.610005+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T21:21:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:55 vm07 bash[20771]: cluster 2026-03-09T21:21:54.610005+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T21:21:56.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:55 vm07 bash[20771]: cluster 2026-03-09T21:21:54.610005+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T21:21:56.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:55 vm07 
bash[28052]: cluster 2026-03-09T21:21:54.610005+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T21:21:56.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:55 vm07 bash[28052]: cluster 2026-03-09T21:21:54.610005+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-09T21:21:56.676 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:21:56 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:21:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:56 vm10 bash[23387]: cluster 2026-03-09T21:21:55.630782+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T21:21:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:56 vm10 bash[23387]: cluster 2026-03-09T21:21:55.630782+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T21:21:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:56 vm10 bash[23387]: cluster 2026-03-09T21:21:55.784586+0000 mgr.y (mgr.24416) 224 : cluster [DBG] pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:56 vm10 bash[23387]: cluster 2026-03-09T21:21:55.784586+0000 mgr.y (mgr.24416) 224 : cluster [DBG] pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:56 vm10 bash[23387]: cluster 2026-03-09T21:21:56.634161+0000 mon.a (mon.0) 1157 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T21:21:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:56 vm10 bash[23387]: cluster 2026-03-09T21:21:56.634161+0000 mon.a (mon.0) 1157 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T21:21:57.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:56 vm07 bash[20771]: 
cluster 2026-03-09T21:21:55.630782+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:56 vm07 bash[20771]: cluster 2026-03-09T21:21:55.630782+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:56 vm07 bash[20771]: cluster 2026-03-09T21:21:55.784586+0000 mgr.y (mgr.24416) 224 : cluster [DBG] pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:56 vm07 bash[20771]: cluster 2026-03-09T21:21:55.784586+0000 mgr.y (mgr.24416) 224 : cluster [DBG] pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:56 vm07 bash[20771]: cluster 2026-03-09T21:21:56.634161+0000 mon.a (mon.0) 1157 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:56 vm07 bash[20771]: cluster 2026-03-09T21:21:56.634161+0000 mon.a (mon.0) 1157 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:56 vm07 bash[28052]: cluster 2026-03-09T21:21:55.630782+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:56 vm07 bash[28052]: cluster 2026-03-09T21:21:55.630782+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:56 vm07 bash[28052]: cluster 2026-03-09T21:21:55.784586+0000 mgr.y (mgr.24416) 224 : cluster [DBG] pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:56 vm07 bash[28052]: cluster 2026-03-09T21:21:55.784586+0000 mgr.y (mgr.24416) 224 : cluster [DBG] pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:56 vm07 bash[28052]: cluster 2026-03-09T21:21:56.634161+0000 mon.a (mon.0) 1157 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T21:21:57.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:56 vm07 bash[28052]: cluster 2026-03-09T21:21:56.634161+0000 mon.a (mon.0) 1157 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:56.422983+0000 mgr.y (mgr.24416) 225 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:56.422983+0000 mgr.y (mgr.24416) 225 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:56.697932+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.107:0/1180450947' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:56.697932+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 
192.168.123.107:0/1180450947' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:56.698234+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:56.698234+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:57.027446+0000 mon.c (mon.2) 104 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:57.027446+0000 mon.c (mon.2) 104 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:57.634263+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: audit 2026-03-09T21:21:57.634263+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: cluster 2026-03-09T21:21:57.637132+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T21:21:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:57 vm10 bash[23387]: cluster 2026-03-09T21:21:57.637132+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:56.422983+0000 mgr.y (mgr.24416) 225 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:56.422983+0000 mgr.y (mgr.24416) 225 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:56.697932+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.107:0/1180450947' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:56.697932+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.107:0/1180450947' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:56.698234+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:56.698234+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:57.027446+0000 mon.c (mon.2) 104 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:57.027446+0000 mon.c (mon.2) 104 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:57.634263+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: audit 2026-03-09T21:21:57.634263+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: cluster 2026-03-09T21:21:57.637132+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:57 vm07 bash[20771]: cluster 2026-03-09T21:21:57.637132+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:56.422983+0000 mgr.y (mgr.24416) 225 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:56.422983+0000 mgr.y (mgr.24416) 225 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:56.697932+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.107:0/1180450947' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:56.697932+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.107:0/1180450947' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:56.698234+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:56.698234+0000 mon.a (mon.0) 1158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:57.027446+0000 mon.c (mon.2) 104 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:57.027446+0000 mon.c (mon.2) 104 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:57.634263+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: audit 2026-03-09T21:21:57.634263+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: cluster 2026-03-09T21:21:57.637132+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T21:21:58.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:57 vm07 bash[28052]: cluster 2026-03-09T21:21:57.637132+0000 mon.a (mon.0) 1160 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-09T21:21:58.649 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_append PASSED [ 71%] 2026-03-09T21:21:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:58 vm10 bash[23387]: cluster 2026-03-09T21:21:57.785130+0000 mgr.y (mgr.24416) 226 : cluster [DBG] pgmap v372: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:58 vm10 bash[23387]: cluster 2026-03-09T21:21:57.785130+0000 mgr.y (mgr.24416) 226 : cluster [DBG] pgmap v372: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:58 vm10 bash[23387]: cluster 2026-03-09T21:21:58.640532+0000 mon.a (mon.0) 1161 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T21:21:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:21:58 vm10 bash[23387]: cluster 2026-03-09T21:21:58.640532+0000 mon.a (mon.0) 1161 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:58 vm07 bash[20771]: cluster 2026-03-09T21:21:57.785130+0000 mgr.y (mgr.24416) 226 : cluster [DBG] pgmap v372: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:59.115 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:58 vm07 bash[20771]: cluster 2026-03-09T21:21:57.785130+0000 mgr.y (mgr.24416) 226 : cluster [DBG] pgmap v372: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:58 vm07 bash[20771]: cluster 2026-03-09T21:21:58.640532+0000 mon.a (mon.0) 1161 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:21:58 vm07 bash[20771]: cluster 2026-03-09T21:21:58.640532+0000 mon.a (mon.0) 1161 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:21:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:21:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:58 vm07 bash[28052]: cluster 2026-03-09T21:21:57.785130+0000 mgr.y (mgr.24416) 226 : cluster [DBG] pgmap v372: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:58 vm07 bash[28052]: cluster 2026-03-09T21:21:57.785130+0000 mgr.y (mgr.24416) 226 : cluster [DBG] pgmap v372: 196 pgs: 196 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:58 vm07 bash[28052]: cluster 2026-03-09T21:21:58.640532+0000 mon.a (mon.0) 1161 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T21:21:59.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:21:58 vm07 bash[28052]: cluster 2026-03-09T21:21:58.640532+0000 mon.a (mon.0) 1161 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-09T21:22:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:00 vm10 bash[23387]: 
cluster 2026-03-09T21:21:59.648680+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T21:22:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:00 vm10 bash[23387]: cluster 2026-03-09T21:21:59.648680+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T21:22:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:00 vm10 bash[23387]: cluster 2026-03-09T21:21:59.785399+0000 mgr.y (mgr.24416) 227 : cluster [DBG] pgmap v375: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:00 vm10 bash[23387]: cluster 2026-03-09T21:21:59.785399+0000 mgr.y (mgr.24416) 227 : cluster [DBG] pgmap v375: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:01.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:00 vm07 bash[20771]: cluster 2026-03-09T21:21:59.648680+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:00 vm07 bash[20771]: cluster 2026-03-09T21:21:59.648680+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:00 vm07 bash[20771]: cluster 2026-03-09T21:21:59.785399+0000 mgr.y (mgr.24416) 227 : cluster [DBG] pgmap v375: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:00 vm07 bash[20771]: cluster 2026-03-09T21:21:59.785399+0000 mgr.y (mgr.24416) 227 : cluster [DBG] pgmap v375: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:22:00 vm07 bash[28052]: cluster 2026-03-09T21:21:59.648680+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:00 vm07 bash[28052]: cluster 2026-03-09T21:21:59.648680+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:00 vm07 bash[28052]: cluster 2026-03-09T21:21:59.785399+0000 mgr.y (mgr.24416) 227 : cluster [DBG] pgmap v375: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:01.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:00 vm07 bash[28052]: cluster 2026-03-09T21:21:59.785399+0000 mgr.y (mgr.24416) 227 : cluster [DBG] pgmap v375: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:01 vm10 bash[23387]: cluster 2026-03-09T21:22:00.666513+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T21:22:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:01 vm10 bash[23387]: cluster 2026-03-09T21:22:00.666513+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T21:22:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:01 vm10 bash[23387]: audit 2026-03-09T21:22:00.924757+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.107:0/1163052644' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:01 vm10 bash[23387]: audit 2026-03-09T21:22:00.924757+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 
192.168.123.107:0/1163052644' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:01 vm10 bash[23387]: audit 2026-03-09T21:22:00.924954+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:01.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:01 vm10 bash[23387]: audit 2026-03-09T21:22:00.924954+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:01 vm07 bash[20771]: cluster 2026-03-09T21:22:00.666513+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:01 vm07 bash[20771]: cluster 2026-03-09T21:22:00.666513+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:01 vm07 bash[20771]: audit 2026-03-09T21:22:00.924757+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.107:0/1163052644' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:01 vm07 bash[20771]: audit 2026-03-09T21:22:00.924757+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.107:0/1163052644' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:01 vm07 bash[20771]: audit 2026-03-09T21:22:00.924954+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:01 vm07 bash[20771]: audit 2026-03-09T21:22:00.924954+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:01 vm07 bash[28052]: cluster 2026-03-09T21:22:00.666513+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:01 vm07 bash[28052]: cluster 2026-03-09T21:22:00.666513+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:01 vm07 bash[28052]: audit 2026-03-09T21:22:00.924757+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.107:0/1163052644' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:01 vm07 bash[28052]: audit 2026-03-09T21:22:00.924757+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.107:0/1163052644' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:01 vm07 bash[28052]: audit 2026-03-09T21:22:00.924954+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:01 vm07 bash[28052]: audit 2026-03-09T21:22:00.924954+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:02.664 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_full PASSED [ 72%] 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: audit 2026-03-09T21:22:01.653390+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: audit 2026-03-09T21:22:01.653390+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: cluster 2026-03-09T21:22:01.656652+0000 mon.a (mon.0) 1166 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: cluster 2026-03-09T21:22:01.656652+0000 mon.a (mon.0) 1166 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: cluster 2026-03-09T21:22:01.785679+0000 mgr.y (mgr.24416) 228 : cluster [DBG] pgmap v378: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: cluster 2026-03-09T21:22:01.785679+0000 mgr.y (mgr.24416) 228 : cluster [DBG] pgmap v378: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:02.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: cluster 2026-03-09T21:22:02.660291+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T21:22:02.942 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:02 vm10 bash[23387]: cluster 2026-03-09T21:22:02.660291+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: audit 2026-03-09T21:22:01.653390+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: audit 2026-03-09T21:22:01.653390+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: cluster 2026-03-09T21:22:01.656652+0000 mon.a (mon.0) 1166 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: cluster 2026-03-09T21:22:01.656652+0000 mon.a (mon.0) 1166 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: cluster 2026-03-09T21:22:01.785679+0000 mgr.y (mgr.24416) 228 : cluster [DBG] pgmap v378: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: cluster 2026-03-09T21:22:01.785679+0000 mgr.y (mgr.24416) 228 : cluster [DBG] pgmap v378: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: cluster 2026-03-09T21:22:02.660291+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:02 vm07 bash[20771]: cluster 
2026-03-09T21:22:02.660291+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: audit 2026-03-09T21:22:01.653390+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: audit 2026-03-09T21:22:01.653390+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: cluster 2026-03-09T21:22:01.656652+0000 mon.a (mon.0) 1166 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: cluster 2026-03-09T21:22:01.656652+0000 mon.a (mon.0) 1166 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: cluster 2026-03-09T21:22:01.785679+0000 mgr.y (mgr.24416) 228 : cluster [DBG] pgmap v378: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: cluster 2026-03-09T21:22:01.785679+0000 mgr.y (mgr.24416) 228 : cluster [DBG] pgmap v378: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: cluster 2026-03-09T21:22:02.660291+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-09T21:22:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:02 vm07 bash[28052]: cluster 2026-03-09T21:22:02.660291+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e282: 8 total, 8 up, 
8 in 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:04 vm07 bash[20771]: cluster 2026-03-09T21:22:03.689218+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:04 vm07 bash[20771]: cluster 2026-03-09T21:22:03.689218+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:04 vm07 bash[20771]: cluster 2026-03-09T21:22:03.785920+0000 mgr.y (mgr.24416) 229 : cluster [DBG] pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:04 vm07 bash[20771]: cluster 2026-03-09T21:22:03.785920+0000 mgr.y (mgr.24416) 229 : cluster [DBG] pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:04 vm07 bash[28052]: cluster 2026-03-09T21:22:03.689218+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:04 vm07 bash[28052]: cluster 2026-03-09T21:22:03.689218+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:04 vm07 bash[28052]: cluster 2026-03-09T21:22:03.785920+0000 mgr.y (mgr.24416) 229 : cluster [DBG] pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:05.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:04 vm07 bash[28052]: cluster 2026-03-09T21:22:03.785920+0000 mgr.y (mgr.24416) 229 : cluster [DBG] pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:05.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:04 vm10 bash[23387]: cluster 2026-03-09T21:22:03.689218+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T21:22:05.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:04 vm10 bash[23387]: cluster 2026-03-09T21:22:03.689218+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-09T21:22:05.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:04 vm10 bash[23387]: cluster 2026-03-09T21:22:03.785920+0000 mgr.y (mgr.24416) 229 : cluster [DBG] pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:05.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:04 vm10 bash[23387]: cluster 2026-03-09T21:22:03.785920+0000 mgr.y (mgr.24416) 229 : cluster [DBG] pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:05 vm07 bash[20771]: cluster 2026-03-09T21:22:04.763883+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:05 vm07 bash[20771]: cluster 2026-03-09T21:22:04.763883+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:05 vm07 bash[20771]: audit 2026-03-09T21:22:04.809742+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.107:0/337030833' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:05 vm07 bash[20771]: audit 2026-03-09T21:22:04.809742+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 
192.168.123.107:0/337030833' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:05 vm07 bash[20771]: audit 2026-03-09T21:22:04.810241+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:05 vm07 bash[20771]: audit 2026-03-09T21:22:04.810241+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:05 vm07 bash[28052]: cluster 2026-03-09T21:22:04.763883+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:05 vm07 bash[28052]: cluster 2026-03-09T21:22:04.763883+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:05 vm07 bash[28052]: audit 2026-03-09T21:22:04.809742+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.107:0/337030833' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:05 vm07 bash[28052]: audit 2026-03-09T21:22:04.809742+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.107:0/337030833' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:05 vm07 bash[28052]: audit 2026-03-09T21:22:04.810241+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:05 vm07 bash[28052]: audit 2026-03-09T21:22:04.810241+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:05 vm10 bash[23387]: cluster 2026-03-09T21:22:04.763883+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T21:22:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:05 vm10 bash[23387]: cluster 2026-03-09T21:22:04.763883+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-09T21:22:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:05 vm10 bash[23387]: audit 2026-03-09T21:22:04.809742+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.107:0/337030833' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:05 vm10 bash[23387]: audit 2026-03-09T21:22:04.809742+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.107:0/337030833' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:05 vm10 bash[23387]: audit 2026-03-09T21:22:04.810241+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:05 vm10 bash[23387]: audit 2026-03-09T21:22:04.810241+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:06.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:22:06 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:22:06.963 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_writesame PASSED [ 73%] 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:06 vm07 bash[20771]: audit 2026-03-09T21:22:05.753909+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:06 vm07 bash[20771]: audit 2026-03-09T21:22:05.753909+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:06 vm07 bash[20771]: cluster 2026-03-09T21:22:05.765061+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:06 vm07 bash[20771]: cluster 2026-03-09T21:22:05.765061+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:06 vm07 bash[20771]: cluster 2026-03-09T21:22:05.786223+0000 mgr.y (mgr.24416) 230 : cluster [DBG] pgmap v384: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:06 vm07 bash[20771]: cluster 2026-03-09T21:22:05.786223+0000 mgr.y (mgr.24416) 230 : cluster [DBG] pgmap v384: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 21:22:06 vm07 bash[28052]: audit 2026-03-09T21:22:05.753909+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:06 vm07 bash[28052]: audit 2026-03-09T21:22:05.753909+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:06 vm07 bash[28052]: cluster 2026-03-09T21:22:05.765061+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:06 vm07 bash[28052]: cluster 2026-03-09T21:22:05.765061+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:06 vm07 bash[28052]: cluster 2026-03-09T21:22:05.786223+0000 mgr.y (mgr.24416) 230 : cluster [DBG] pgmap v384: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:07.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:06 vm07 bash[28052]: cluster 2026-03-09T21:22:05.786223+0000 mgr.y (mgr.24416) 230 : cluster [DBG] pgmap v384: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:07.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:06 vm10 bash[23387]: audit 2026-03-09T21:22:05.753909+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:07.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:06 vm10 bash[23387]: audit 2026-03-09T21:22:05.753909+0000 mon.a (mon.0) 1171 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:07.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:06 vm10 bash[23387]: cluster 2026-03-09T21:22:05.765061+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T21:22:07.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:06 vm10 bash[23387]: cluster 2026-03-09T21:22:05.765061+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-09T21:22:07.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:06 vm10 bash[23387]: cluster 2026-03-09T21:22:05.786223+0000 mgr.y (mgr.24416) 230 : cluster [DBG] pgmap v384: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:07.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:06 vm10 bash[23387]: cluster 2026-03-09T21:22:05.786223+0000 mgr.y (mgr.24416) 230 : cluster [DBG] pgmap v384: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:07 vm07 bash[20771]: audit 2026-03-09T21:22:06.424629+0000 mgr.y (mgr.24416) 231 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:07 vm07 bash[20771]: audit 2026-03-09T21:22:06.424629+0000 mgr.y (mgr.24416) 231 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:07 vm07 bash[20771]: cluster 2026-03-09T21:22:06.949476+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:07 vm07 bash[20771]: cluster 
2026-03-09T21:22:06.949476+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:07 vm07 bash[20771]: cluster 2026-03-09T21:22:07.786655+0000 mgr.y (mgr.24416) 232 : cluster [DBG] pgmap v386: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:07 vm07 bash[20771]: cluster 2026-03-09T21:22:07.786655+0000 mgr.y (mgr.24416) 232 : cluster [DBG] pgmap v386: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:07 vm07 bash[28052]: audit 2026-03-09T21:22:06.424629+0000 mgr.y (mgr.24416) 231 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:07 vm07 bash[28052]: audit 2026-03-09T21:22:06.424629+0000 mgr.y (mgr.24416) 231 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:07 vm07 bash[28052]: cluster 2026-03-09T21:22:06.949476+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:07 vm07 bash[28052]: cluster 2026-03-09T21:22:06.949476+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T21:22:08.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:07 vm07 bash[28052]: cluster 2026-03-09T21:22:07.786655+0000 mgr.y (mgr.24416) 232 : cluster [DBG] pgmap v386: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.365 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:07 vm07 bash[28052]: cluster 2026-03-09T21:22:07.786655+0000 mgr.y (mgr.24416) 232 : cluster [DBG] pgmap v386: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:07 vm10 bash[23387]: audit 2026-03-09T21:22:06.424629+0000 mgr.y (mgr.24416) 231 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:07 vm10 bash[23387]: audit 2026-03-09T21:22:06.424629+0000 mgr.y (mgr.24416) 231 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:07 vm10 bash[23387]: cluster 2026-03-09T21:22:06.949476+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T21:22:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:07 vm10 bash[23387]: cluster 2026-03-09T21:22:06.949476+0000 mon.a (mon.0) 1173 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-09T21:22:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:07 vm10 bash[23387]: cluster 2026-03-09T21:22:07.786655+0000 mgr.y (mgr.24416) 232 : cluster [DBG] pgmap v386: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:07 vm10 bash[23387]: cluster 2026-03-09T21:22:07.786655+0000 mgr.y (mgr.24416) 232 : cluster [DBG] pgmap v386: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:08.975 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:22:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:22:08] "GET /metrics HTTP/1.1" 503 
1621 "" "Prometheus/2.51.0" 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:08 vm07 bash[20771]: cluster 2026-03-09T21:22:07.963156+0000 mon.a (mon.0) 1174 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:08 vm07 bash[20771]: cluster 2026-03-09T21:22:07.963156+0000 mon.a (mon.0) 1174 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:08 vm07 bash[20771]: cluster 2026-03-09T21:22:07.985846+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:08 vm07 bash[20771]: cluster 2026-03-09T21:22:07.985846+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:08 vm07 bash[28052]: cluster 2026-03-09T21:22:07.963156+0000 mon.a (mon.0) 1174 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:08 vm07 bash[28052]: cluster 2026-03-09T21:22:07.963156+0000 mon.a (mon.0) 1174 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:08 vm07 bash[28052]: cluster 2026-03-09T21:22:07.985846+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T21:22:09.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:08 vm07 bash[28052]: cluster 2026-03-09T21:22:07.985846+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T21:22:09.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:08 vm10 bash[23387]: cluster 
2026-03-09T21:22:07.963156+0000 mon.a (mon.0) 1174 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:09.460 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:08 vm10 bash[23387]: cluster 2026-03-09T21:22:07.963156+0000 mon.a (mon.0) 1174 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:09.460 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:08 vm10 bash[23387]: cluster 2026-03-09T21:22:07.985846+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T21:22:09.460 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:08 vm10 bash[23387]: cluster 2026-03-09T21:22:07.985846+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-09T21:22:10.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:10 vm07 bash[20771]: cluster 2026-03-09T21:22:08.995563+0000 mon.a (mon.0) 1176 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:10 vm07 bash[20771]: cluster 2026-03-09T21:22:08.995563+0000 mon.a (mon.0) 1176 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:10 vm07 bash[20771]: audit 2026-03-09T21:22:09.041025+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:10 vm07 bash[20771]: audit 2026-03-09T21:22:09.041025+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 
192.168.123.107:0/3741344542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:10 vm07 bash[20771]: cluster 2026-03-09T21:22:09.786901+0000 mgr.y (mgr.24416) 233 : cluster [DBG] pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:10 vm07 bash[20771]: cluster 2026-03-09T21:22:09.786901+0000 mgr.y (mgr.24416) 233 : cluster [DBG] pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:10 vm07 bash[28052]: cluster 2026-03-09T21:22:08.995563+0000 mon.a (mon.0) 1176 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:10 vm07 bash[28052]: cluster 2026-03-09T21:22:08.995563+0000 mon.a (mon.0) 1176 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:10 vm07 bash[28052]: audit 2026-03-09T21:22:09.041025+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:10 vm07 bash[28052]: audit 2026-03-09T21:22:09.041025+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 
192.168.123.107:0/3741344542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:10 vm07 bash[28052]: cluster 2026-03-09T21:22:09.786901+0000 mgr.y (mgr.24416) 233 : cluster [DBG] pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:10.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:10 vm07 bash[28052]: cluster 2026-03-09T21:22:09.786901+0000 mgr.y (mgr.24416) 233 : cluster [DBG] pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:10 vm10 bash[23387]: cluster 2026-03-09T21:22:08.995563+0000 mon.a (mon.0) 1176 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T21:22:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:10 vm10 bash[23387]: cluster 2026-03-09T21:22:08.995563+0000 mon.a (mon.0) 1176 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-09T21:22:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:10 vm10 bash[23387]: audit 2026-03-09T21:22:09.041025+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:10 vm10 bash[23387]: audit 2026-03-09T21:22:09.041025+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 
192.168.123.107:0/3741344542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:10 vm10 bash[23387]: cluster 2026-03-09T21:22:09.786901+0000 mgr.y (mgr.24416) 233 : cluster [DBG] pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:10 vm10 bash[23387]: cluster 2026-03-09T21:22:09.786901+0000 mgr.y (mgr.24416) 233 : cluster [DBG] pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:11.044 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_stat PASSED [ 74%] 2026-03-09T21:22:11.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:11 vm07 bash[20771]: audit 2026-03-09T21:22:10.021653+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:11.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:11 vm07 bash[20771]: audit 2026-03-09T21:22:10.021653+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? 
192.168.123.107:0/3741344542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:11.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:11 vm07 bash[20771]: cluster 2026-03-09T21:22:10.032034+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T21:22:11.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:11 vm07 bash[20771]: cluster 2026-03-09T21:22:10.032034+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T21:22:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:11 vm07 bash[28052]: audit 2026-03-09T21:22:10.021653+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:11 vm07 bash[28052]: audit 2026-03-09T21:22:10.021653+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:11 vm07 bash[28052]: cluster 2026-03-09T21:22:10.032034+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T21:22:11.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:11 vm07 bash[28052]: cluster 2026-03-09T21:22:10.032034+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T21:22:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:11 vm10 bash[23387]: audit 2026-03-09T21:22:10.021653+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? 192.168.123.107:0/3741344542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:11 vm10 bash[23387]: audit 2026-03-09T21:22:10.021653+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? 
192.168.123.107:0/3741344542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:11 vm10 bash[23387]: cluster 2026-03-09T21:22:10.032034+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T21:22:11.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:11 vm10 bash[23387]: cluster 2026-03-09T21:22:10.032034+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:12 vm07 bash[20771]: cluster 2026-03-09T21:22:11.037964+0000 mon.a (mon.0) 1180 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:12 vm07 bash[20771]: cluster 2026-03-09T21:22:11.037964+0000 mon.a (mon.0) 1180 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:12 vm07 bash[20771]: cluster 2026-03-09T21:22:11.787160+0000 mgr.y (mgr.24416) 234 : cluster [DBG] pgmap v392: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:12 vm07 bash[20771]: cluster 2026-03-09T21:22:11.787160+0000 mgr.y (mgr.24416) 234 : cluster [DBG] pgmap v392: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:12 vm07 bash[20771]: audit 2026-03-09T21:22:12.033916+0000 mon.c (mon.2) 107 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:12 vm07 bash[20771]: audit 2026-03-09T21:22:12.033916+0000 mon.c (mon.2) 107 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": 
"osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:12 vm07 bash[28052]: cluster 2026-03-09T21:22:11.037964+0000 mon.a (mon.0) 1180 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:12 vm07 bash[28052]: cluster 2026-03-09T21:22:11.037964+0000 mon.a (mon.0) 1180 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:12 vm07 bash[28052]: cluster 2026-03-09T21:22:11.787160+0000 mgr.y (mgr.24416) 234 : cluster [DBG] pgmap v392: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:12 vm07 bash[28052]: cluster 2026-03-09T21:22:11.787160+0000 mgr.y (mgr.24416) 234 : cluster [DBG] pgmap v392: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:12 vm07 bash[28052]: audit 2026-03-09T21:22:12.033916+0000 mon.c (mon.2) 107 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:12.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:12 vm07 bash[28052]: audit 2026-03-09T21:22:12.033916+0000 mon.c (mon.2) 107 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:12.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:12 vm10 bash[23387]: cluster 2026-03-09T21:22:11.037964+0000 mon.a (mon.0) 1180 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-09T21:22:12.473 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:12 vm10 bash[23387]: cluster 2026-03-09T21:22:11.037964+0000 mon.a (mon.0) 1180 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 
2026-03-09T21:22:12.473 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:12 vm10 bash[23387]: cluster 2026-03-09T21:22:11.787160+0000 mgr.y (mgr.24416) 234 : cluster [DBG] pgmap v392: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:12.473 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:12 vm10 bash[23387]: cluster 2026-03-09T21:22:11.787160+0000 mgr.y (mgr.24416) 234 : cluster [DBG] pgmap v392: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:12.473 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:12 vm10 bash[23387]: audit 2026-03-09T21:22:12.033916+0000 mon.c (mon.2) 107 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:12.473 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:12 vm10 bash[23387]: audit 2026-03-09T21:22:12.033916+0000 mon.c (mon.2) 107 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:13 vm07 bash[20771]: cluster 2026-03-09T21:22:12.081676+0000 mon.a (mon.0) 1181 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T21:22:13.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:13 vm07 bash[20771]: cluster 2026-03-09T21:22:12.081676+0000 mon.a (mon.0) 1181 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T21:22:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:13 vm07 bash[28052]: cluster 2026-03-09T21:22:12.081676+0000 mon.a (mon.0) 1181 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T21:22:13.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:13 vm07 bash[28052]: cluster 2026-03-09T21:22:12.081676+0000 mon.a (mon.0) 1181 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T21:22:13.442 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:13 vm10 bash[23387]: cluster 2026-03-09T21:22:12.081676+0000 mon.a (mon.0) 1181 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T21:22:13.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:13 vm10 bash[23387]: cluster 2026-03-09T21:22:12.081676+0000 mon.a (mon.0) 1181 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: cluster 2026-03-09T21:22:13.078926+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: cluster 2026-03-09T21:22:13.078926+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: audit 2026-03-09T21:22:13.123969+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.107:0/69759478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: audit 2026-03-09T21:22:13.123969+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.107:0/69759478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: audit 2026-03-09T21:22:13.124551+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: audit 2026-03-09T21:22:13.124551+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: cluster 2026-03-09T21:22:13.787533+0000 mgr.y (mgr.24416) 235 : cluster [DBG] pgmap v395: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:14 vm07 bash[20771]: cluster 2026-03-09T21:22:13.787533+0000 mgr.y (mgr.24416) 235 : cluster [DBG] pgmap v395: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: cluster 2026-03-09T21:22:13.078926+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: cluster 2026-03-09T21:22:13.078926+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: audit 2026-03-09T21:22:13.123969+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.107:0/69759478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: audit 2026-03-09T21:22:13.123969+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.107:0/69759478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: audit 2026-03-09T21:22:13.124551+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: audit 2026-03-09T21:22:13.124551+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: cluster 2026-03-09T21:22:13.787533+0000 mgr.y (mgr.24416) 235 : cluster [DBG] pgmap v395: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:14.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:14 vm07 bash[28052]: cluster 2026-03-09T21:22:13.787533+0000 mgr.y (mgr.24416) 235 : cluster [DBG] pgmap v395: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: cluster 2026-03-09T21:22:13.078926+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: cluster 2026-03-09T21:22:13.078926+0000 mon.a (mon.0) 1182 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: audit 2026-03-09T21:22:13.123969+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 192.168.123.107:0/69759478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: audit 2026-03-09T21:22:13.123969+0000 mon.c (mon.2) 108 : audit [INF] from='client.? 
192.168.123.107:0/69759478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: audit 2026-03-09T21:22:13.124551+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: audit 2026-03-09T21:22:13.124551+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: cluster 2026-03-09T21:22:13.787533+0000 mgr.y (mgr.24416) 235 : cluster [DBG] pgmap v395: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:14.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:14 vm10 bash[23387]: cluster 2026-03-09T21:22:13.787533+0000 mgr.y (mgr.24416) 235 : cluster [DBG] pgmap v395: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:15.123 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_remove PASSED [ 75%] 2026-03-09T21:22:15.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:15 vm07 bash[20771]: cluster 2026-03-09T21:22:14.066303+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:15 vm07 bash[20771]: cluster 2026-03-09T21:22:14.066303+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:22:15 vm07 bash[20771]: audit 2026-03-09T21:22:14.073626+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:15 vm07 bash[20771]: audit 2026-03-09T21:22:14.073626+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:15 vm07 bash[20771]: cluster 2026-03-09T21:22:14.083371+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:15 vm07 bash[20771]: cluster 2026-03-09T21:22:14.083371+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:15 vm07 bash[28052]: cluster 2026-03-09T21:22:14.066303+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:15 vm07 bash[28052]: cluster 2026-03-09T21:22:14.066303+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:15 vm07 bash[28052]: audit 2026-03-09T21:22:14.073626+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:15 vm07 bash[28052]: audit 2026-03-09T21:22:14.073626+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:15 vm07 bash[28052]: cluster 2026-03-09T21:22:14.083371+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T21:22:15.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:15 vm07 bash[28052]: cluster 2026-03-09T21:22:14.083371+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T21:22:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:15 vm10 bash[23387]: cluster 2026-03-09T21:22:14.066303+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:15 vm10 bash[23387]: cluster 2026-03-09T21:22:14.066303+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:15 vm10 bash[23387]: audit 2026-03-09T21:22:14.073626+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:15 vm10 bash[23387]: audit 2026-03-09T21:22:14.073626+0000 mon.a (mon.0) 1185 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:15 vm10 bash[23387]: cluster 2026-03-09T21:22:14.083371+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T21:22:15.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:15 vm10 bash[23387]: cluster 2026-03-09T21:22:14.083371+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-09T21:22:16.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:16 vm10 bash[23387]: cluster 2026-03-09T21:22:15.114097+0000 mon.a (mon.0) 1187 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T21:22:16.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:16 vm10 bash[23387]: cluster 2026-03-09T21:22:15.114097+0000 mon.a (mon.0) 1187 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T21:22:16.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:16 vm10 bash[23387]: cluster 2026-03-09T21:22:15.787808+0000 mgr.y (mgr.24416) 236 : cluster [DBG] pgmap v398: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:16.435 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:16 vm10 bash[23387]: cluster 2026-03-09T21:22:15.787808+0000 mgr.y (mgr.24416) 236 : cluster [DBG] pgmap v398: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:16 vm07 bash[20771]: cluster 2026-03-09T21:22:15.114097+0000 mon.a (mon.0) 1187 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:16 vm07 bash[20771]: cluster 2026-03-09T21:22:15.114097+0000 mon.a (mon.0) 1187 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:16 vm07 bash[20771]: cluster 
2026-03-09T21:22:15.787808+0000 mgr.y (mgr.24416) 236 : cluster [DBG] pgmap v398: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:16 vm07 bash[20771]: cluster 2026-03-09T21:22:15.787808+0000 mgr.y (mgr.24416) 236 : cluster [DBG] pgmap v398: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:16 vm07 bash[28052]: cluster 2026-03-09T21:22:15.114097+0000 mon.a (mon.0) 1187 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:16 vm07 bash[28052]: cluster 2026-03-09T21:22:15.114097+0000 mon.a (mon.0) 1187 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:16 vm07 bash[28052]: cluster 2026-03-09T21:22:15.787808+0000 mgr.y (mgr.24416) 236 : cluster [DBG] pgmap v398: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:16 vm07 bash[28052]: cluster 2026-03-09T21:22:15.787808+0000 mgr.y (mgr.24416) 236 : cluster [DBG] pgmap v398: 164 pgs: 164 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:16.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:22:16 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:22:17.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:17 vm10 bash[23387]: cluster 2026-03-09T21:22:16.137660+0000 mon.a (mon.0) 1188 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T21:22:17.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:17 vm10 bash[23387]: cluster 2026-03-09T21:22:16.137660+0000 mon.a (mon.0) 1188 : cluster [DBG] osdmap e295: 8 
total, 8 up, 8 in 2026-03-09T21:22:17.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:17 vm10 bash[23387]: audit 2026-03-09T21:22:16.435329+0000 mgr.y (mgr.24416) 237 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:17.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:17 vm10 bash[23387]: audit 2026-03-09T21:22:16.435329+0000 mgr.y (mgr.24416) 237 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:17 vm07 bash[20771]: cluster 2026-03-09T21:22:16.137660+0000 mon.a (mon.0) 1188 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:17 vm07 bash[20771]: cluster 2026-03-09T21:22:16.137660+0000 mon.a (mon.0) 1188 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:17 vm07 bash[20771]: audit 2026-03-09T21:22:16.435329+0000 mgr.y (mgr.24416) 237 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:17 vm07 bash[20771]: audit 2026-03-09T21:22:16.435329+0000 mgr.y (mgr.24416) 237 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:17 vm07 bash[28052]: cluster 2026-03-09T21:22:16.137660+0000 mon.a (mon.0) 1188 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:17 vm07 bash[28052]: cluster 2026-03-09T21:22:16.137660+0000 mon.a (mon.0) 1188 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 
2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:17 vm07 bash[28052]: audit 2026-03-09T21:22:16.435329+0000 mgr.y (mgr.24416) 237 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:17 vm07 bash[28052]: audit 2026-03-09T21:22:16.435329+0000 mgr.y (mgr.24416) 237 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:17.140666+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:17.140666+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:17.169968+0000 mon.a (mon.0) 1190 : audit [DBG] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:17.169968+0000 mon.a (mon.0) 1190 : audit [DBG] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:17.170638+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:17.170638+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:17.788309+0000 mgr.y (mgr.24416) 238 : cluster [DBG] pgmap v401: 196 pgs: 196 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:17.788309+0000 mgr.y (mgr.24416) 238 : cluster [DBG] pgmap v401: 196 pgs: 196 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:18.125137+0000 mon.a (mon.0) 1192 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:18.125137+0000 mon.a (mon.0) 1192 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:18.128014+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:18.128014+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:18.138027+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: cluster 2026-03-09T21:22:18.138027+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:18.140396+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T21:22:18.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:18 vm10 bash[23387]: audit 2026-03-09T21:22:18.140396+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: debug 2026-03-09T21:22:18.137+0000 7efed56c7640 -1 mon.a@0(leader).osd e297 definitely_dead 0 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:17.140666+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:17.140666+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:17.169968+0000 mon.a (mon.0) 1190 : audit [DBG] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:17.169968+0000 mon.a (mon.0) 1190 : audit [DBG] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:17.170638+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:17.170638+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:17.788309+0000 mgr.y (mgr.24416) 238 : cluster [DBG] pgmap v401: 196 pgs: 196 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:17.788309+0000 mgr.y (mgr.24416) 238 : cluster [DBG] pgmap v401: 196 pgs: 196 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:18.125137+0000 mon.a (mon.0) 1192 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 
2026-03-09T21:22:18.125137+0000 mon.a (mon.0) 1192 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:18.128014+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:18.128014+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:18.138027+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: cluster 2026-03-09T21:22:18.138027+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:18.140396+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:18 vm07 bash[20771]: audit 2026-03-09T21:22:18.140396+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:17.140666+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:17.140666+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:17.169968+0000 mon.a (mon.0) 1190 : audit [DBG] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:17.169968+0000 mon.a (mon.0) 1190 : audit [DBG] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:17.170638+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:17.170638+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:17.788309+0000 mgr.y (mgr.24416) 238 : cluster [DBG] pgmap v401: 196 pgs: 196 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:18.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:17.788309+0000 mgr.y (mgr.24416) 238 : cluster [DBG] pgmap v401: 196 pgs: 196 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:18.125137+0000 mon.a (mon.0) 1192 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:18.125137+0000 mon.a (mon.0) 1192 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:18.128014+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:18.128014+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:18.138027+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: cluster 2026-03-09T21:22:18.138027+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:18.140396+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T21:22:18.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:18 vm07 bash[28052]: audit 2026-03-09T21:22:18.140396+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T21:22:19.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:22:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:22:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:22:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:19 vm10 bash[23387]: cluster 2026-03-09T21:22:19.128610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:19 vm10 bash[23387]: cluster 2026-03-09T21:22:19.128610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:19 vm10 bash[23387]: audit 2026-03-09T21:22:19.131374+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T21:22:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:19 vm10 bash[23387]: audit 2026-03-09T21:22:19.131374+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T21:22:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:19 vm10 bash[23387]: cluster 2026-03-09T21:22:19.140300+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e298: 8 total, 5 up, 8 in 2026-03-09T21:22:19.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:19 vm10 bash[23387]: cluster 2026-03-09T21:22:19.140300+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e298: 8 total, 5 up, 8 in 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:19 vm07 bash[20771]: cluster 2026-03-09T21:22:19.128610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:19 vm07 bash[20771]: cluster 2026-03-09T21:22:19.128610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:19 vm07 bash[20771]: audit 2026-03-09T21:22:19.131374+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:19 vm07 bash[20771]: audit 2026-03-09T21:22:19.131374+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:19 vm07 bash[20771]: cluster 2026-03-09T21:22:19.140300+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e298: 8 total, 5 up, 8 in 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:19 vm07 bash[20771]: cluster 2026-03-09T21:22:19.140300+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e298: 8 total, 5 up, 8 in 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:19 vm07 bash[28052]: cluster 2026-03-09T21:22:19.128610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:19 vm07 bash[28052]: cluster 2026-03-09T21:22:19.128610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:19 vm07 bash[28052]: audit 2026-03-09T21:22:19.131374+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:19 vm07 bash[28052]: audit 2026-03-09T21:22:19.131374+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:19 vm07 bash[28052]: cluster 2026-03-09T21:22:19.140300+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e298: 8 total, 5 up, 8 in 2026-03-09T21:22:19.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:19 vm07 bash[28052]: cluster 2026-03-09T21:22:19.140300+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e298: 8 total, 5 up, 8 in 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.736653+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.736653+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.736655+0000 osd.0 (osd.0) 4 : cluster [DBG] map e298 wrongly marked me down at e298 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.736655+0000 osd.0 (osd.0) 4 : cluster [DBG] map e298 wrongly marked me down at e298 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.737211+0000 mon.a (mon.0) 1199 : cluster [INF] osd.0 marked itself dead as of e298 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.737211+0000 mon.a (mon.0) 1199 : cluster [INF] osd.0 marked itself dead as of e298 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.788693+0000 mgr.y (mgr.24416) 239 : cluster [DBG] pgmap 
v404: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:19.788693+0000 mgr.y (mgr.24416) 239 : cluster [DBG] pgmap v404: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:20.138580+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e299: 8 total, 5 up, 8 in 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:20 vm07 bash[20771]: cluster 2026-03-09T21:22:20.138580+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e299: 8 total, 5 up, 8 in 2026-03-09T21:22:20.365 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:22:20 vm07 bash[30944]: debug 2026-03-09T21:22:20.069+0000 7f229ee1b640 -1 osd.0 298 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:20.365 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:22:20 vm07 bash[30944]: debug 2026-03-09T21:22:20.157+0000 7f2292a05640 -1 osd.0 299 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:19.736653+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:19.736653+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:19.736655+0000 osd.0 (osd.0) 4 : cluster [DBG] map e298 wrongly marked me down at e298 2026-03-09T21:22:20.365 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:19.736655+0000 osd.0 (osd.0) 4 : cluster [DBG] map e298 wrongly marked me down at e298
2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:19.737211+0000 mon.a (mon.0) 1199 : cluster [INF] osd.0 marked itself dead as of e298
2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:19.788693+0000 mgr.y (mgr.24416) 239 : cluster [DBG] pgmap v404: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-09T21:22:20.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:20 vm07 bash[28052]: cluster 2026-03-09T21:22:20.138580+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e299: 8 total, 5 up, 8 in
2026-03-09T21:22:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:20 vm10 bash[23387]: cluster 2026-03-09T21:22:19.736653+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2026-03-09T21:22:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:20 vm10 bash[23387]: cluster 2026-03-09T21:22:19.736655+0000 osd.0 (osd.0) 4 : cluster [DBG] map e298 wrongly marked me down at e298
2026-03-09T21:22:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:20 vm10 bash[23387]: cluster 2026-03-09T21:22:19.737211+0000 mon.a (mon.0) 1199 : cluster [INF] osd.0 marked itself dead as of e298
2026-03-09T21:22:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:20 vm10 bash[23387]: cluster 2026-03-09T21:22:19.788693+0000 mgr.y (mgr.24416) 239 : cluster [DBG] pgmap v404: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-09T21:22:20.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:20 vm10 bash[23387]: cluster 2026-03-09T21:22:20.138580+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e299: 8 total, 5 up, 8 in
2026-03-09T21:22:20.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:22:20 vm10 bash[26618]: debug 2026-03-09T21:22:20.305+0000 7fe970fad640 -1 osd.4 299 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:21 vm10 bash[23387]: cluster 2026-03-09T21:22:20.242376+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2026-03-09T21:22:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:21 vm10 bash[23387]: cluster 2026-03-09T21:22:20.242378+0000 osd.4 (osd.4) 4 : cluster [DBG] map e299 wrongly marked me down at e298
2026-03-09T21:22:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:21 vm10 bash[23387]: cluster 2026-03-09T21:22:20.243333+0000 mon.a (mon.0) 1201 : cluster [INF] osd.4 marked itself dead as of e299
2026-03-09T21:22:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:21 vm10 bash[23387]: cluster 2026-03-09T21:22:20.261487+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2026-03-09T21:22:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:21 vm10 bash[23387]: cluster 2026-03-09T21:22:20.261488+0000 osd.7 (osd.7) 4 : cluster [DBG] map e299 wrongly marked me down at e298
2026-03-09T21:22:21.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:21 vm10 bash[23387]: cluster 2026-03-09T21:22:20.263874+0000 mon.a (mon.0) 1202 : cluster [INF] osd.7 marked itself dead as of e299
2026-03-09T21:22:21.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:22:21 vm10 bash[26618]: debug 2026-03-09T21:22:21.197+0000 7fe964b97640 -1 osd.4 300 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:21.442 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:21 vm10 bash[44771]: debug 2026-03-09T21:22:21.093+0000 7fa1ebadc640 -1 osd.7 299 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:21.442 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:21 vm10 bash[44771]: debug 2026-03-09T21:22:21.197+0000 7fa1de6b2640 -1 osd.7 300 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:21.614 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:22:21 vm07 bash[30944]: debug 2026-03-09T21:22:21.197+0000 7f2292a05640 -1 osd.0 300 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:21 vm07 bash[20771]: cluster 2026-03-09T21:22:20.242376+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:21 vm07 bash[20771]: cluster 2026-03-09T21:22:20.242378+0000 osd.4 (osd.4) 4 : cluster [DBG] map e299 wrongly marked me down at e298
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:21 vm07 bash[20771]: cluster 2026-03-09T21:22:20.243333+0000 mon.a (mon.0) 1201 : cluster [INF] osd.4 marked itself dead as of e299
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:21 vm07 bash[20771]: cluster 2026-03-09T21:22:20.261487+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:21 vm07 bash[20771]: cluster 2026-03-09T21:22:20.261488+0000 osd.7 (osd.7) 4 : cluster [DBG] map e299 wrongly marked me down at e298
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:21 vm07 bash[20771]: cluster 2026-03-09T21:22:20.263874+0000 mon.a (mon.0) 1202 : cluster [INF] osd.7 marked itself dead as of e299
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:21 vm07 bash[28052]: cluster 2026-03-09T21:22:20.242376+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:21 vm07 bash[28052]: cluster 2026-03-09T21:22:20.242378+0000 osd.4 (osd.4) 4 : cluster [DBG] map e299 wrongly marked me down at e298
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:21 vm07 bash[28052]: cluster 2026-03-09T21:22:20.243333+0000 mon.a (mon.0) 1201 : cluster [INF] osd.4 marked itself dead as of e299
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:21 vm07 bash[28052]: cluster 2026-03-09T21:22:20.261487+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:21 vm07 bash[28052]: cluster 2026-03-09T21:22:20.261488+0000 osd.7 (osd.7) 4 : cluster [DBG] map e299 wrongly marked me down at e298
2026-03-09T21:22:21.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:21 vm07 bash[28052]: cluster 2026-03-09T21:22:20.263874+0000 mon.a (mon.0) 1202 : cluster [INF] osd.7 marked itself dead as of e299
2026-03-09T21:22:22.364 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:22:22 vm07 bash[30944]: debug 2026-03-09T21:22:22.209+0000 7f229ac45640 -1 osd.0 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T21:22:22.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:22 vm07 bash[28052]: cluster 2026-03-09T21:22:21.201243+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e300: 8 total, 5 up, 8 in
2026-03-09T21:22:22.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:22 vm07 bash[28052]: cluster 2026-03-09T21:22:21.789001+0000 mgr.y (mgr.24416) 240 : cluster [DBG] pgmap v407: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:22:22.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:22 vm07 bash[28052]: audit 2026-03-09T21:22:22.151015+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:22.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:22 vm07 bash[20771]: cluster 2026-03-09T21:22:21.201243+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e300: 8 total, 5 up, 8 in
2026-03-09T21:22:22.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:22 vm07 bash[20771]: cluster 2026-03-09T21:22:21.789001+0000 mgr.y (mgr.24416) 240 : cluster [DBG] pgmap v407: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:22:22.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:22 vm07 bash[20771]: audit 2026-03-09T21:22:22.151015+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:22 vm10 bash[23387]: cluster 2026-03-09T21:22:21.201243+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e300: 8 total, 5 up, 8 in
2026-03-09T21:22:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:22 vm10 bash[23387]: cluster 2026-03-09T21:22:21.789001+0000 mgr.y (mgr.24416) 240 : cluster [DBG] pgmap v407: 196 pgs: 79 stale+active+clean, 117 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:22:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:22 vm10 bash[23387]: audit 2026-03-09T21:22:22.151015+0000 mon.a (mon.0) 1204 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:22.692 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:22:22 vm10 bash[26618]: debug 2026-03-09T21:22:22.205+0000 7fe96cdd7640 -1 osd.4 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T21:22:22.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:22 vm10 bash[44771]: debug 2026-03-09T21:22:22.205+0000 7fa1e68f2640 -1 osd.7 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T21:22:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:23 vm07 bash[20771]: cluster 2026-03-09T21:22:22.192578+0000 mon.a (mon.0) 1205 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set)
2026-03-09T21:22:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:23 vm07 bash[20771]: audit 2026-03-09T21:22:22.199395+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:22:23.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:23 vm07 bash[20771]: cluster 2026-03-09T21:22:22.218171+0000 mon.a (mon.0) 1207 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in
2026-03-09T21:22:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:23 vm07 bash[28052]: cluster 2026-03-09T21:22:22.192578+0000 mon.a (mon.0) 1205 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set)
2026-03-09T21:22:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:23 vm07 bash[28052]: audit 2026-03-09T21:22:22.199395+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:22:23.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:23 vm07 bash[28052]: cluster 2026-03-09T21:22:22.218171+0000 mon.a (mon.0) 1207 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in
2026-03-09T21:22:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:23 vm10 bash[23387]: cluster 2026-03-09T21:22:22.192578+0000 mon.a (mon.0) 1205 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set)
2026-03-09T21:22:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:23 vm10 bash[23387]: audit 2026-03-09T21:22:22.199395+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:22:23.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:23 vm10 bash[23387]: cluster 2026-03-09T21:22:22.218171+0000 mon.a (mon.0) 1207 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:23.199840+0000 mon.a (mon.0) 1208 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:23.224342+0000 mon.a (mon.0) 1209 : cluster [INF] osd.0 v2:192.168.123.107:6801/2141296969 boot
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:23.224442+0000 mon.a (mon.0) 1210 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:23.224470+0000 mon.a (mon.0) 1211 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:23.224489+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: audit 2026-03-09T21:22:23.238438+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: audit 2026-03-09T21:22:23.238984+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: audit 2026-03-09T21:22:23.239350+0000 mon.c (mon.2) 111 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:23.789425+0000 mgr.y (mgr.24416) 241 : cluster [DBG] pgmap v410: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%)
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:24 vm07 bash[20771]: cluster 2026-03-09T21:22:24.217479+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:23.199840+0000 mon.a (mon.0) 1208 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-09T21:22:24.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:23.224342+0000 mon.a (mon.0) 1209 : cluster [INF] osd.0 v2:192.168.123.107:6801/2141296969 boot
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:23.224442+0000 mon.a (mon.0) 1210 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:23.224470+0000 mon.a (mon.0) 1211 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:23.224489+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: audit 2026-03-09T21:22:23.238438+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: audit 2026-03-09T21:22:23.238984+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: audit 2026-03-09T21:22:23.239350+0000 mon.c (mon.2) 111 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:23.789425+0000 mgr.y (mgr.24416) 241 : cluster [DBG] pgmap v410: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%)
2026-03-09T21:22:24.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:24 vm07 bash[28052]: cluster 2026-03-09T21:22:24.217479+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:23.199840+0000 mon.a (mon.0) 1208 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:23.224342+0000 mon.a (mon.0) 1209 : cluster [INF] osd.0 v2:192.168.123.107:6801/2141296969 boot
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:23.224442+0000 mon.a (mon.0) 1210 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:23.224470+0000 mon.a (mon.0) 1211 : cluster [INF] osd.4 v2:192.168.123.110:6800/4164782911 boot
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:23.224489+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: audit 2026-03-09T21:22:23.238438+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: audit 2026-03-09T21:22:23.238984+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: audit 2026-03-09T21:22:23.239350+0000 mon.c (mon.2) 111 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:23.789425+0000 mgr.y (mgr.24416) 241 : cluster [DBG] pgmap v410: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%)
2026-03-09T21:22:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:24 vm10 bash[23387]: cluster 2026-03-09T21:22:24.217479+0000 mon.a (mon.0) 1213 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in
2026-03-09T21:22:25.614 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:25 vm07 bash[20771]: cluster 2026-03-09T21:22:24.240103+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: Reduced data availability: 22 pgs inactive (PG_AVAILABILITY)
2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:25 vm07 bash[20771]: cluster 2026-03-09T21:22:24.240127+0000 mon.a (mon.0) 1215 : cluster [WRN] Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded (PG_DEGRADED)
2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:25 vm07 bash[20771]: audit 2026-03-09T21:22:25.132202+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:25 vm07 bash[20771]: audit 2026-03-09T21:22:25.132202+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:25 vm07 bash[28052]: cluster 2026-03-09T21:22:24.240103+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: Reduced data availability: 22 pgs inactive (PG_AVAILABILITY) 2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:25 vm07 bash[28052]: cluster 2026-03-09T21:22:24.240103+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: Reduced data availability: 22 pgs inactive (PG_AVAILABILITY) 2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:25 vm07 bash[28052]: cluster 2026-03-09T21:22:24.240127+0000 mon.a (mon.0) 1215 : cluster [WRN] Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:25 vm07 bash[28052]: cluster 2026-03-09T21:22:24.240127+0000 mon.a (mon.0) 1215 : cluster [WRN] Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:25 vm07 bash[28052]: audit 2026-03-09T21:22:25.132202+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:25.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:25 vm07 bash[28052]: audit 2026-03-09T21:22:25.132202+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:25 vm10 bash[23387]: cluster 2026-03-09T21:22:24.240103+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: Reduced data availability: 22 pgs inactive (PG_AVAILABILITY) 2026-03-09T21:22:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:25 vm10 bash[23387]: cluster 2026-03-09T21:22:24.240103+0000 mon.a (mon.0) 1214 : cluster [WRN] Health check failed: Reduced data availability: 22 pgs inactive (PG_AVAILABILITY) 2026-03-09T21:22:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:25 vm10 bash[23387]: cluster 2026-03-09T21:22:24.240127+0000 mon.a (mon.0) 1215 : cluster [WRN] Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:25 vm10 bash[23387]: cluster 2026-03-09T21:22:24.240127+0000 mon.a (mon.0) 1215 : cluster [WRN] Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:25 vm10 bash[23387]: audit 2026-03-09T21:22:25.132202+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:25 vm10 bash[23387]: audit 2026-03-09T21:22:25.132202+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? 
192.168.123.107:0/222552207' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:26.377 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete PASSED [ 76%] 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.255764+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.255764+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: cluster 2026-03-09T21:22:25.258709+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: cluster 2026-03-09T21:22:25.258709+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.536432+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.536432+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: cluster 2026-03-09T21:22:25.789895+0000 mgr.y 
(mgr.24416) 242 : cluster [DBG] pgmap v413: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: cluster 2026-03-09T21:22:25.789895+0000 mgr.y (mgr.24416) 242 : cluster [DBG] pgmap v413: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.881708+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.881708+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.882919+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.882919+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 
2026-03-09T21:22:25.889035+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:26 vm10 bash[23387]: audit 2026-03-09T21:22:25.889035+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:26.693 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:22:26 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.255764+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.255764+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: cluster 2026-03-09T21:22:25.258709+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: cluster 2026-03-09T21:22:25.258709+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.536432+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.536432+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: cluster 2026-03-09T21:22:25.789895+0000 mgr.y (mgr.24416) 242 : cluster [DBG] pgmap v413: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: cluster 2026-03-09T21:22:25.789895+0000 mgr.y (mgr.24416) 242 : cluster [DBG] pgmap v413: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.881708+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.881708+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.882919+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.882919+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.889035+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:26 vm07 bash[20771]: audit 2026-03-09T21:22:25.889035+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.255764+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.255764+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 192.168.123.107:0/222552207' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: cluster 2026-03-09T21:22:25.258709+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: cluster 2026-03-09T21:22:25.258709+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.536432+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.536432+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: cluster 2026-03-09T21:22:25.789895+0000 mgr.y (mgr.24416) 242 : cluster [DBG] pgmap v413: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: cluster 2026-03-09T21:22:25.789895+0000 mgr.y (mgr.24416) 242 : cluster [DBG] pgmap v413: 196 pgs: 76 active+undersized, 40 undersized+peered, 7 stale+active+clean, 25 active+undersized+degraded, 17 undersized+degraded+peered, 31 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.881708+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.881708+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.882919+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.882919+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.889035+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:26 vm07 bash[28052]: audit 2026-03-09T21:22:25.889035+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: cluster 2026-03-09T21:22:26.371697+0000 mon.a (mon.0) 1220 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: cluster 2026-03-09T21:22:26.371697+0000 mon.a (mon.0) 1220 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: audit 2026-03-09T21:22:26.446061+0000 mgr.y (mgr.24416) 243 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: audit 2026-03-09T21:22:26.446061+0000 mgr.y (mgr.24416) 243 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: audit 2026-03-09T21:22:27.047322+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: audit 2026-03-09T21:22:27.047322+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: audit 2026-03-09T21:22:27.048479+0000 mon.c (mon.2) 115 : audit [DBG] from='mgr.24416 
192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:27.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:27 vm10 bash[23387]: audit 2026-03-09T21:22:27.048479+0000 mon.c (mon.2) 115 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: cluster 2026-03-09T21:22:26.371697+0000 mon.a (mon.0) 1220 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: cluster 2026-03-09T21:22:26.371697+0000 mon.a (mon.0) 1220 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: audit 2026-03-09T21:22:26.446061+0000 mgr.y (mgr.24416) 243 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: audit 2026-03-09T21:22:26.446061+0000 mgr.y (mgr.24416) 243 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: audit 2026-03-09T21:22:27.047322+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: audit 2026-03-09T21:22:27.047322+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: audit 2026-03-09T21:22:27.048479+0000 mon.c (mon.2) 115 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:27 vm07 bash[20771]: audit 2026-03-09T21:22:27.048479+0000 mon.c (mon.2) 115 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: cluster 2026-03-09T21:22:26.371697+0000 mon.a (mon.0) 1220 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: cluster 2026-03-09T21:22:26.371697+0000 mon.a (mon.0) 1220 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: audit 2026-03-09T21:22:26.446061+0000 mgr.y (mgr.24416) 243 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: audit 2026-03-09T21:22:26.446061+0000 mgr.y (mgr.24416) 243 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: audit 2026-03-09T21:22:27.047322+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: audit 2026-03-09T21:22:27.047322+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: audit 2026-03-09T21:22:27.048479+0000 mon.c (mon.2) 115 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:27 vm07 bash[28052]: audit 2026-03-09T21:22:27.048479+0000 mon.c (mon.2) 115 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:28 vm10 bash[23387]: cluster 2026-03-09T21:22:27.393977+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T21:22:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:28 vm10 bash[23387]: cluster 2026-03-09T21:22:27.393977+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T21:22:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:28 vm10 bash[23387]: cluster 2026-03-09T21:22:27.790387+0000 mgr.y (mgr.24416) 244 : cluster [DBG] pgmap v416: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:28.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:28 vm10 bash[23387]: cluster 2026-03-09T21:22:27.790387+0000 mgr.y (mgr.24416) 244 : cluster [DBG] pgmap v416: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:22:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:22:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:28 vm07 bash[28052]: cluster 2026-03-09T21:22:27.393977+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:28 vm07 bash[28052]: cluster 2026-03-09T21:22:27.393977+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 
2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:28 vm07 bash[28052]: cluster 2026-03-09T21:22:27.790387+0000 mgr.y (mgr.24416) 244 : cluster [DBG] pgmap v416: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:28 vm07 bash[28052]: cluster 2026-03-09T21:22:27.790387+0000 mgr.y (mgr.24416) 244 : cluster [DBG] pgmap v416: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:28 vm07 bash[20771]: cluster 2026-03-09T21:22:27.393977+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:28 vm07 bash[20771]: cluster 2026-03-09T21:22:27.393977+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:28 vm07 bash[20771]: cluster 2026-03-09T21:22:27.790387+0000 mgr.y (mgr.24416) 244 : cluster [DBG] pgmap v416: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:28.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:28 vm07 bash[20771]: cluster 2026-03-09T21:22:27.790387+0000 mgr.y (mgr.24416) 244 : cluster [DBG] pgmap v416: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:28.401496+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:28.401496+0000 mon.a 
(mon.0) 1223 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:28.416770+0000 mon.a (mon.0) 1224 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 22 pgs inactive) 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:28.416770+0000 mon.a (mon.0) 1224 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 22 pgs inactive) 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:28.416787+0000 mon.a (mon.0) 1225 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded) 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:28.416787+0000 mon.a (mon.0) 1225 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded) 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:28.432403+0000 mon.b (mon.1) 62 : audit [DBG] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:28.432403+0000 mon.b (mon.1) 62 : audit [DBG] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:28.434404+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 
192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:28.434404+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:28.435121+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:28.435121+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:29.387955+0000 mon.a (mon.0) 1227 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:29.387955+0000 mon.a (mon.0) 1227 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:29.390950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:29.390950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:29.395618+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: cluster 2026-03-09T21:22:29.395618+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:29.397832+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:29.397832+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:29.400265+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:29 vm10 bash[23387]: audit 2026-03-09T21:22:29.400265+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: debug 2026-03-09T21:22:29.397+0000 7efed56c7640 -1 mon.a@0(leader).osd e308 definitely_dead 0 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:28.401496+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:28.401496+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:28.416770+0000 mon.a (mon.0) 1224 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 22 pgs inactive) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:28.416770+0000 mon.a (mon.0) 1224 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 22 pgs inactive) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:28.416787+0000 mon.a (mon.0) 1225 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:28.416787+0000 mon.a (mon.0) 1225 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:28.432403+0000 mon.b (mon.1) 62 : audit [DBG] from='client.? 
192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:28.432403+0000 mon.b (mon.1) 62 : audit [DBG] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:28.434404+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:28.434404+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:28.435121+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:28.435121+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:29.387955+0000 mon.a (mon.0) 1227 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:29.387955+0000 mon.a (mon.0) 1227 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:29.390950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:29.390950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:29.395618+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: cluster 2026-03-09T21:22:29.395618+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:29.397832+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:29.397832+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 
192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:29.400265+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:29 vm07 bash[20771]: audit 2026-03-09T21:22:29.400265+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:28.401496+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:28.401496+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:28.416770+0000 mon.a (mon.0) 1224 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 22 pgs inactive) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:28.416770+0000 mon.a (mon.0) 1224 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 22 pgs inactive) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:28.416787+0000 mon.a (mon.0) 1225 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 
2026-03-09T21:22:28.416787+0000 mon.a (mon.0) 1225 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/600 objects degraded (36.000%), 42 pgs degraded) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:28.432403+0000 mon.b (mon.1) 62 : audit [DBG] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:28.432403+0000 mon.b (mon.1) 62 : audit [DBG] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:28.434404+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:28.434404+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:28.435121+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:28.435121+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:29.387955+0000 mon.a (mon.0) 1227 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:29.387955+0000 mon.a (mon.0) 1227 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:29.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:29.390950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:29.390950+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:29.395618+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: cluster 2026-03-09T21:22:29.395618+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:29.397832+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:29.397832+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 
192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:29.400265+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:29.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:29 vm07 bash[28052]: audit 2026-03-09T21:22:29.400265+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:30 vm07 bash[20771]: cluster 2026-03-09T21:22:29.790770+0000 mgr.y (mgr.24416) 245 : cluster [DBG] pgmap v419: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:30 vm07 bash[20771]: cluster 2026-03-09T21:22:29.790770+0000 mgr.y (mgr.24416) 245 : cluster [DBG] pgmap v419: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:30 vm07 bash[20771]: cluster 2026-03-09T21:22:30.391356+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:30 vm07 bash[20771]: cluster 2026-03-09T21:22:30.391356+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:30 vm07 bash[28052]: cluster 2026-03-09T21:22:29.790770+0000 mgr.y (mgr.24416) 245 : cluster [DBG] pgmap v419: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 
KiB/s rd, 6 op/s 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:30 vm07 bash[28052]: cluster 2026-03-09T21:22:29.790770+0000 mgr.y (mgr.24416) 245 : cluster [DBG] pgmap v419: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:30 vm07 bash[28052]: cluster 2026-03-09T21:22:30.391356+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:30.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:30 vm07 bash[28052]: cluster 2026-03-09T21:22:30.391356+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:30 vm10 bash[23387]: cluster 2026-03-09T21:22:29.790770+0000 mgr.y (mgr.24416) 245 : cluster [DBG] pgmap v419: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:30 vm10 bash[23387]: cluster 2026-03-09T21:22:29.790770+0000 mgr.y (mgr.24416) 245 : cluster [DBG] pgmap v419: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 6.5 KiB/s rd, 6 op/s 2026-03-09T21:22:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:30 vm10 bash[23387]: cluster 2026-03-09T21:22:30.391356+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:30 vm10 bash[23387]: cluster 2026-03-09T21:22:30.391356+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:31 vm07 bash[20771]: audit 2026-03-09T21:22:30.426440+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:31 vm07 bash[20771]: audit 2026-03-09T21:22:30.426440+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:31 vm07 bash[20771]: cluster 2026-03-09T21:22:30.432687+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e309: 8 total, 5 up, 8 in 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:31 vm07 bash[20771]: cluster 2026-03-09T21:22:30.432687+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e309: 8 total, 5 up, 8 in 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:31 vm07 bash[20771]: cluster 2026-03-09T21:22:31.413830+0000 mon.a (mon.0) 1234 : cluster [INF] osd.7 marked itself dead as of e309 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:31 vm07 bash[20771]: cluster 2026-03-09T21:22:31.413830+0000 mon.a (mon.0) 1234 : cluster [INF] osd.7 marked itself dead as of e309 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:31 vm07 bash[28052]: audit 2026-03-09T21:22:30.426440+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:31 vm07 bash[28052]: audit 2026-03-09T21:22:30.426440+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:31 vm07 bash[28052]: cluster 2026-03-09T21:22:30.432687+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e309: 8 total, 5 up, 8 in 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:31 vm07 bash[28052]: cluster 2026-03-09T21:22:30.432687+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e309: 8 total, 5 up, 8 in 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:31 vm07 bash[28052]: cluster 2026-03-09T21:22:31.413830+0000 mon.a (mon.0) 1234 : cluster [INF] osd.7 marked itself dead as of e309 2026-03-09T21:22:31.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:31 vm07 bash[28052]: cluster 2026-03-09T21:22:31.413830+0000 mon.a (mon.0) 1234 : cluster [INF] osd.7 marked itself dead as of e309 2026-03-09T21:22:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:31 vm10 bash[23387]: audit 2026-03-09T21:22:30.426440+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T21:22:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:31 vm10 bash[23387]: audit 2026-03-09T21:22:30.426440+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T21:22:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:31 vm10 bash[23387]: cluster 2026-03-09T21:22:30.432687+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e309: 8 total, 5 up, 8 in 2026-03-09T21:22:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:31 vm10 bash[23387]: cluster 2026-03-09T21:22:30.432687+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e309: 8 total, 5 up, 8 in 2026-03-09T21:22:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:31 vm10 bash[23387]: cluster 2026-03-09T21:22:31.413830+0000 mon.a (mon.0) 1234 : cluster [INF] osd.7 marked itself dead as of e309 2026-03-09T21:22:31.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:31 vm10 bash[23387]: cluster 2026-03-09T21:22:31.413830+0000 mon.a (mon.0) 1234 : cluster [INF] osd.7 marked itself dead as of e309 2026-03-09T21:22:32.442 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:32 vm10 bash[44771]: debug 2026-03-09T21:22:32.001+0000 7fa1eb2c9640 -1 osd.7 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:32.865 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:22:32 vm07 bash[42797]: debug 2026-03-09T21:22:32.485+0000 7f2909eb8640 -1 osd.2 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:32.865 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:22:32 vm07 bash[42797]: debug 2026-03-09T21:22:32.629+0000 7f28fca8e640 -1 osd.2 311 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.412695+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.412695+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 
2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.412698+0000 osd.7 (osd.7) 6 : cluster [DBG] map e309 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.412698+0000 osd.7 (osd.7) 6 : cluster [DBG] map e309 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.538634+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e310: 8 total, 5 up, 8 in 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.538634+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e310: 8 total, 5 up, 8 in 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.791132+0000 mgr.y (mgr.24416) 246 : cluster [DBG] pgmap v422: 196 pgs: 16 stale+creating+peering, 61 stale+active+clean, 16 creating+peering, 103 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:31.791132+0000 mgr.y (mgr.24416) 246 : cluster [DBG] pgmap v422: 196 pgs: 16 stale+creating+peering, 61 stale+active+clean, 16 creating+peering, 103 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.272793+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.272793+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:32.865 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.272795+0000 osd.2 (osd.2) 4 : cluster [DBG] map e310 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.272795+0000 osd.2 (osd.2) 4 : cluster [DBG] map e310 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.273162+0000 mon.a (mon.0) 1236 : cluster [INF] osd.2 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.273162+0000 mon.a (mon.0) 1236 : cluster [INF] osd.2 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.279626+0000 mon.a (mon.0) 1237 : cluster [INF] osd.5 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:32 vm07 bash[20771]: cluster 2026-03-09T21:22:32.279626+0000 mon.a (mon.0) 1237 : cluster [INF] osd.5 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.412695+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.412695+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.412698+0000 osd.7 (osd.7) 6 : cluster [DBG] map e309 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 
2026-03-09T21:22:31.412698+0000 osd.7 (osd.7) 6 : cluster [DBG] map e309 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.538634+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e310: 8 total, 5 up, 8 in 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.538634+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e310: 8 total, 5 up, 8 in 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.791132+0000 mgr.y (mgr.24416) 246 : cluster [DBG] pgmap v422: 196 pgs: 16 stale+creating+peering, 61 stale+active+clean, 16 creating+peering, 103 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:31.791132+0000 mgr.y (mgr.24416) 246 : cluster [DBG] pgmap v422: 196 pgs: 16 stale+creating+peering, 61 stale+active+clean, 16 creating+peering, 103 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.272793+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.272793+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.272795+0000 osd.2 (osd.2) 4 : cluster [DBG] map e310 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.272795+0000 osd.2 
(osd.2) 4 : cluster [DBG] map e310 wrongly marked me down at e309 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.273162+0000 mon.a (mon.0) 1236 : cluster [INF] osd.2 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.273162+0000 mon.a (mon.0) 1236 : cluster [INF] osd.2 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.279626+0000 mon.a (mon.0) 1237 : cluster [INF] osd.5 marked itself dead as of e310 2026-03-09T21:22:32.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:32 vm07 bash[28052]: cluster 2026-03-09T21:22:32.279626+0000 mon.a (mon.0) 1237 : cluster [INF] osd.5 marked itself dead as of e310 2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.412695+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.412695+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.412698+0000 osd.7 (osd.7) 6 : cluster [DBG] map e309 wrongly marked me down at e309 2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.412698+0000 osd.7 (osd.7) 6 : cluster [DBG] map e309 wrongly marked me down at e309 2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.538634+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e310: 8 total, 5 up, 8 in 2026-03-09T21:22:32.942 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.538634+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e310: 8 total, 5 up, 8 in
2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:31.791132+0000 mgr.y (mgr.24416) 246 : cluster [DBG] pgmap v422: 196 pgs: 16 stale+creating+peering, 61 stale+active+clean, 16 creating+peering, 103 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail
2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:32.272793+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running
2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:32.272795+0000 osd.2 (osd.2) 4 : cluster [DBG] map e310 wrongly marked me down at e309
2026-03-09T21:22:32.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:32.273162+0000 mon.a (mon.0) 1236 : cluster [INF] osd.2 marked itself dead as of e310
2026-03-09T21:22:32.943 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:32 vm10 bash[23387]: cluster 2026-03-09T21:22:32.279626+0000 mon.a (mon.0) 1237 : cluster [INF] osd.5 marked itself dead as of e310
2026-03-09T21:22:32.943 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:32 vm10 bash[44771]: debug 2026-03-09T21:22:32.617+0000 7fa1de6b2640 -1 osd.7 311 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:33.621 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:22:33 vm10 bash[32520]: debug 2026-03-09T21:22:33.193+0000 7fc21b7b6640 -1 osd.5 311 osdmap NOUP flag is set, waiting for it to clear
2026-03-09T21:22:33.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:33 vm10 bash[23387]: cluster 2026-03-09T21:22:32.278067+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2026-03-09T21:22:33.958 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:33 vm10 bash[23387]: cluster 2026-03-09T21:22:32.278068+0000 osd.5 (osd.5) 4 : cluster [DBG] map e310 wrongly marked me down at e309
2026-03-09T21:22:33.958 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:33 vm10 bash[23387]: cluster 2026-03-09T21:22:32.625359+0000 mon.a (mon.0) 1238 : cluster [DBG] osdmap e311: 8 total, 5 up, 8 in
2026-03-09T21:22:33.958 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:33 vm10 bash[23387]: audit 2026-03-09T21:22:33.454699+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:33.958 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:33 vm10 bash[23387]: audit 2026-03-09T21:22:33.455486+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:33.958 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:22:33 vm10 bash[32520]: debug 2026-03-09T21:22:33.645+0000 7fc2175e0640 -1 osd.5 312 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T21:22:33.958 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:33 vm10 bash[44771]: debug 2026-03-09T21:22:33.641+0000 7fa1e68f2640 -1 osd.7 312 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T21:22:34.114 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:22:33 vm07 bash[42797]: debug 2026-03-09T21:22:33.645+0000 7f2904cce640 -1 osd.2 312 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:33 vm07 bash[20771]: cluster 2026-03-09T21:22:32.278067+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:33 vm07 bash[20771]: cluster 2026-03-09T21:22:32.278068+0000 osd.5 (osd.5) 4 : cluster [DBG] map e310 wrongly marked me down at e309
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:33 vm07 bash[20771]: cluster 2026-03-09T21:22:32.625359+0000 mon.a (mon.0) 1238 : cluster [DBG] osdmap e311: 8 total, 5 up, 8 in
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:33 vm07 bash[20771]: audit 2026-03-09T21:22:33.454699+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:33 vm07 bash[20771]: audit 2026-03-09T21:22:33.455486+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:33 vm07 bash[28052]: cluster 2026-03-09T21:22:32.278067+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:33 vm07 bash[28052]: cluster 2026-03-09T21:22:32.278068+0000 osd.5 (osd.5) 4 : cluster [DBG] map e310 wrongly marked me down at e309
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:33 vm07 bash[28052]: cluster 2026-03-09T21:22:32.625359+0000 mon.a (mon.0) 1238 : cluster [DBG] osdmap e311: 8 total, 5 up, 8 in
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:33 vm07 bash[28052]: audit 2026-03-09T21:22:33.454699+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:34.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:33 vm07 bash[28052]: audit 2026-03-09T21:22:33.455486+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:34 vm07 bash[20771]: cluster 2026-03-09T21:22:33.617879+0000 mon.a (mon.0) 1240 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set)
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:34 vm07 bash[20771]: audit 2026-03-09T21:22:33.638765+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:34 vm07 bash[20771]: cluster 2026-03-09T21:22:33.646310+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e312: 8 total, 5 up, 8 in
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:34 vm07 bash[20771]: cluster 2026-03-09T21:22:33.791757+0000 mgr.y (mgr.24416) 247 : cluster [DBG] pgmap v425: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%)
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:34 vm07 bash[28052]: cluster 2026-03-09T21:22:33.617879+0000 mon.a (mon.0) 1240 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set)
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:34 vm07 bash[28052]: audit 2026-03-09T21:22:33.638765+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:34 vm07 bash[28052]: cluster 2026-03-09T21:22:33.646310+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e312: 8 total, 5 up, 8 in
2026-03-09T21:22:35.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:34 vm07 bash[28052]: cluster 2026-03-09T21:22:33.791757+0000 mgr.y (mgr.24416) 247 : cluster [DBG] pgmap v425: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%)
2026-03-09T21:22:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:34 vm10 bash[23387]: cluster 2026-03-09T21:22:33.617879+0000 mon.a (mon.0) 1240 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set)
2026-03-09T21:22:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:34 vm10 bash[23387]: audit 2026-03-09T21:22:33.638765+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:22:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:34 vm10 bash[23387]: cluster 2026-03-09T21:22:33.646310+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e312: 8 total, 5 up, 8 in
2026-03-09T21:22:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:34 vm10 bash[23387]: cluster 2026-03-09T21:22:33.791757+0000 mgr.y (mgr.24416) 247 : cluster [DBG] pgmap v425: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.639384+0000 mon.a (mon.0) 1243 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.639465+0000 mon.a (mon.0) 1244 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.639479+0000 mon.a (mon.0) 1245 : cluster [WRN] Health check failed: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded (PG_DEGRADED)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.746397+0000 mon.a (mon.0) 1246 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.746504+0000 mon.a (mon.0) 1247 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.746738+0000 mon.a (mon.0) 1248 : cluster [INF] osd.5 v2:192.168.123.110:6804/1216077544 boot
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: cluster 2026-03-09T21:22:34.746874+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: audit 2026-03-09T21:22:34.766341+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: audit 2026-03-09T21:22:34.774427+0000 mon.c (mon.2) 117 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:35 vm07 bash[20771]: audit 2026-03-09T21:22:34.774928+0000 mon.c (mon.2) 118 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.639384+0000 mon.a (mon.0) 1243 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.639465+0000 mon.a (mon.0) 1244 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.639479+0000 mon.a (mon.0) 1245 : cluster [WRN] Health check failed: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded (PG_DEGRADED)
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.746397+0000 mon.a (mon.0) 1246 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.746504+0000 mon.a (mon.0) 1247 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.746738+0000 mon.a (mon.0) 1248 : cluster [INF] osd.5 v2:192.168.123.110:6804/1216077544 boot
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: cluster 2026-03-09T21:22:34.746874+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: audit 2026-03-09T21:22:34.766341+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: audit 2026-03-09T21:22:34.774427+0000 mon.c (mon.2) 117 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:22:36.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:35 vm07 bash[28052]: audit 2026-03-09T21:22:34.774928+0000 mon.c (mon.2) 118 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.639384+0000 mon.a (mon.0) 1243 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.639465+0000 mon.a (mon.0) 1244 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.639479+0000 mon.a (mon.0) 1245 : cluster [WRN] Health check failed: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded (PG_DEGRADED)
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.746397+0000 mon.a (mon.0) 1246 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.746504+0000 mon.a (mon.0) 1247 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.746738+0000 mon.a (mon.0) 1248 : cluster [INF] osd.5 v2:192.168.123.110:6804/1216077544 boot
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: cluster 2026-03-09T21:22:34.746874+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: audit 2026-03-09T21:22:34.766341+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: audit 2026-03-09T21:22:34.774427+0000 mon.c (mon.2) 117 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T21:22:36.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:35 vm10 bash[23387]: audit 2026-03-09T21:22:34.774928+0000 mon.c (mon.2) 118 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T21:22:36.942 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:22:36 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:36 vm07 bash[20771]: cluster 2026-03-09T21:22:35.792271+0000 mgr.y (mgr.24416) 248 : cluster [DBG] pgmap v427: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%)
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:36 vm07 bash[20771]: cluster 2026-03-09T21:22:35.827639+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:36 vm07 bash[20771]: audit 2026-03-09T21:22:36.388504+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:36 vm07 bash[20771]: audit 2026-03-09T21:22:36.389195+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:36 vm07 bash[28052]: cluster 2026-03-09T21:22:35.792271+0000 mgr.y (mgr.24416) 248 : cluster [DBG] pgmap v427: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%)
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:36 vm07 bash[28052]: cluster 2026-03-09T21:22:35.827639+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:36 vm07 bash[28052]: audit 2026-03-09T21:22:36.388504+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:37.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:36 vm07 bash[28052]: audit 2026-03-09T21:22:36.389195+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: cluster 2026-03-09T21:22:35.792271+0000 mgr.y (mgr.24416) 248 : cluster [DBG] pgmap v427: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%)
2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: cluster 2026-03-09T21:22:35.792271+0000 mgr.y (mgr.24416) 248 : cluster [DBG] pgmap v427: 196 pgs: 5 active+undersized+degraded+wait, 4 undersized+degraded+peered+wait, 21 undersized+peered, 66 active+undersized, 2 stale+creating+peering, 7 stale+active+clean, 5 undersized+degraded+peered, 11 undersized+peered+wait, 18 active+undersized+wait, 25 active+undersized+degraded, 32 active+clean; 455 KiB 
data, 447 MiB used, 160 GiB / 160 GiB avail; 196/597 objects degraded (32.831%) 2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: cluster 2026-03-09T21:22:35.827639+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: cluster 2026-03-09T21:22:35.827639+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in 2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: audit 2026-03-09T21:22:36.388504+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: audit 2026-03-09T21:22:36.388504+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.107:0/4168870384' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: audit 2026-03-09T21:22:36.389195+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:37.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:36 vm10 bash[23387]: audit 2026-03-09T21:22:36.389195+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:37.981 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb PASSED [ 78%] 2026-03-09T21:22:38.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:37 vm07 bash[20771]: audit 2026-03-09T21:22:36.454480+0000 mgr.y (mgr.24416) 249 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:37 vm07 bash[20771]: audit 2026-03-09T21:22:36.454480+0000 mgr.y (mgr.24416) 249 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:37 vm07 bash[20771]: audit 2026-03-09T21:22:36.969272+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:37 vm07 bash[20771]: audit 2026-03-09T21:22:36.969272+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:37 vm07 bash[20771]: cluster 2026-03-09T21:22:36.980999+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:37 vm07 bash[20771]: cluster 2026-03-09T21:22:36.980999+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:38 vm07 bash[20771]: cluster 2026-03-09T21:22:37.792848+0000 mgr.y (mgr.24416) 250 : cluster [DBG] pgmap v430: 196 pgs: 196 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.1 KiB/s rd, 247 B/s wr, 4 op/s 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:38 vm07 bash[20771]: cluster 2026-03-09T21:22:37.792848+0000 mgr.y (mgr.24416) 250 : cluster [DBG] pgmap v430: 196 pgs: 196 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.1 KiB/s rd, 247 B/s wr, 4 op/s 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: audit 2026-03-09T21:22:36.454480+0000 mgr.y (mgr.24416) 249 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: audit 2026-03-09T21:22:36.454480+0000 mgr.y (mgr.24416) 249 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: audit 2026-03-09T21:22:36.969272+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: audit 2026-03-09T21:22:36.969272+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: cluster 2026-03-09T21:22:36.980999+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: cluster 2026-03-09T21:22:36.980999+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: cluster 2026-03-09T21:22:37.792848+0000 mgr.y (mgr.24416) 250 : cluster [DBG] pgmap v430: 196 pgs: 196 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.1 KiB/s rd, 247 B/s wr, 4 op/s 2026-03-09T21:22:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:37 vm07 bash[28052]: cluster 2026-03-09T21:22:37.792848+0000 mgr.y (mgr.24416) 250 : cluster [DBG] pgmap v430: 196 pgs: 196 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.1 KiB/s rd, 247 B/s wr, 4 op/s 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: audit 2026-03-09T21:22:36.454480+0000 mgr.y (mgr.24416) 249 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: audit 2026-03-09T21:22:36.454480+0000 mgr.y (mgr.24416) 249 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 
21:22:37 vm10 bash[23387]: audit 2026-03-09T21:22:36.969272+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: audit 2026-03-09T21:22:36.969272+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: cluster 2026-03-09T21:22:36.980999+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: cluster 2026-03-09T21:22:36.980999+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: cluster 2026-03-09T21:22:37.792848+0000 mgr.y (mgr.24416) 250 : cluster [DBG] pgmap v430: 196 pgs: 196 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.1 KiB/s rd, 247 B/s wr, 4 op/s 2026-03-09T21:22:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:37 vm10 bash[23387]: cluster 2026-03-09T21:22:37.792848+0000 mgr.y (mgr.24416) 250 : cluster [DBG] pgmap v430: 196 pgs: 196 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.1 KiB/s rd, 247 B/s wr, 4 op/s 2026-03-09T21:22:39.020 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:22:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:22:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:39 vm07 bash[20771]: cluster 2026-03-09T21:22:37.970066+0000 mon.a (mon.0) 1254 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:22:39 vm07 bash[20771]: cluster 2026-03-09T21:22:37.970066+0000 mon.a (mon.0) 1254 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:39 vm07 bash[20771]: cluster 2026-03-09T21:22:37.970098+0000 mon.a (mon.0) 1255 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:39 vm07 bash[20771]: cluster 2026-03-09T21:22:37.970098+0000 mon.a (mon.0) 1255 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:39 vm07 bash[20771]: cluster 2026-03-09T21:22:37.977029+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:39 vm07 bash[20771]: cluster 2026-03-09T21:22:37.977029+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:39 vm07 bash[28052]: cluster 2026-03-09T21:22:37.970066+0000 mon.a (mon.0) 1254 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:39 vm07 bash[28052]: cluster 2026-03-09T21:22:37.970066+0000 mon.a (mon.0) 1254 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:39 vm07 bash[28052]: cluster 2026-03-09T21:22:37.970098+0000 mon.a (mon.0) 1255 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded) 
2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:39 vm07 bash[28052]: cluster 2026-03-09T21:22:37.970098+0000 mon.a (mon.0) 1255 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded) 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:39 vm07 bash[28052]: cluster 2026-03-09T21:22:37.977029+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T21:22:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:39 vm07 bash[28052]: cluster 2026-03-09T21:22:37.977029+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T21:22:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:39 vm10 bash[23387]: cluster 2026-03-09T21:22:37.970066+0000 mon.a (mon.0) 1254 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T21:22:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:39 vm10 bash[23387]: cluster 2026-03-09T21:22:37.970066+0000 mon.a (mon.0) 1254 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-09T21:22:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:39 vm10 bash[23387]: cluster 2026-03-09T21:22:37.970098+0000 mon.a (mon.0) 1255 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded) 2026-03-09T21:22:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:39 vm10 bash[23387]: cluster 2026-03-09T21:22:37.970098+0000 mon.a (mon.0) 1255 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 196/597 objects degraded (32.831%), 39 pgs degraded) 2026-03-09T21:22:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:39 vm10 bash[23387]: cluster 2026-03-09T21:22:37.977029+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e316: 8 total, 8 
up, 8 in 2026-03-09T21:22:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:39 vm10 bash[23387]: cluster 2026-03-09T21:22:37.977029+0000 mon.a (mon.0) 1256 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: debug 2026-03-09T21:22:40.009+0000 7efed56c7640 -1 mon.a@0(leader).osd e318 definitely_dead 0 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:39.005144+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:39.005144+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:39.007485+0000 mon.a (mon.0) 1258 : audit [DBG] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:39.007485+0000 mon.a (mon.0) 1258 : audit [DBG] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:39.007747+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:39.007747+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:39.793144+0000 mgr.y (mgr.24416) 251 : cluster [DBG] pgmap v433: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 3 op/s 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:39.793144+0000 mgr.y (mgr.24416) 251 : cluster [DBG] pgmap v433: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 3 op/s 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:39.998049+0000 mon.a (mon.0) 1260 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:39.998049+0000 mon.a (mon.0) 1260 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:40.002492+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:40.002492+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:40.012082+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: cluster 2026-03-09T21:22:40.012082+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:40.013064+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:40 vm07 bash[20771]: audit 2026-03-09T21:22:40.013064+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:39.005144+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:39.005144+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:39.007485+0000 mon.a (mon.0) 1258 : audit [DBG] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:39.007485+0000 mon.a (mon.0) 1258 : audit [DBG] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:39.007747+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:39.007747+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:39.793144+0000 mgr.y (mgr.24416) 251 : cluster [DBG] pgmap v433: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 3 op/s 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:39.793144+0000 mgr.y (mgr.24416) 251 : cluster [DBG] pgmap v433: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 3 op/s 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:39.998049+0000 mon.a (mon.0) 1260 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 
2026-03-09T21:22:39.998049+0000 mon.a (mon.0) 1260 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:40.002492+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:40.002492+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:40.012082+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: cluster 2026-03-09T21:22:40.012082+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:40.013064+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T21:22:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:40 vm07 bash[28052]: audit 2026-03-09T21:22:40.013064+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:39.005144+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:39.005144+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:39.007485+0000 mon.a (mon.0) 1258 : audit [DBG] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:39.007485+0000 mon.a (mon.0) 1258 : audit [DBG] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:39.007747+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:39.007747+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:39.793144+0000 mgr.y (mgr.24416) 251 : cluster [DBG] pgmap v433: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 3 op/s 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:39.793144+0000 mgr.y (mgr.24416) 251 : cluster [DBG] pgmap v433: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 3 op/s 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:39.998049+0000 mon.a (mon.0) 1260 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:39.998049+0000 mon.a (mon.0) 1260 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:40.002492+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:40.002492+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:40.012082+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: cluster 2026-03-09T21:22:40.012082+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:40.013064+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T21:22:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:40 vm10 bash[23387]: audit 2026-03-09T21:22:40.013064+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:41 vm07 bash[20771]: cluster 2026-03-09T21:22:41.003020+0000 mon.a (mon.0) 1264 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:41 vm07 bash[20771]: cluster 2026-03-09T21:22:41.003020+0000 mon.a (mon.0) 1264 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:41 vm07 bash[20771]: audit 2026-03-09T21:22:41.005704+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:41 vm07 bash[20771]: audit 2026-03-09T21:22:41.005704+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:41 vm07 bash[20771]: cluster 2026-03-09T21:22:41.011176+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e319: 8 total, 5 up, 8 in 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:41 vm07 bash[20771]: cluster 2026-03-09T21:22:41.011176+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e319: 8 total, 5 up, 8 in 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:41 vm07 bash[28052]: cluster 2026-03-09T21:22:41.003020+0000 mon.a (mon.0) 1264 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:41 vm07 bash[28052]: cluster 2026-03-09T21:22:41.003020+0000 mon.a (mon.0) 1264 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:41 vm07 bash[28052]: audit 2026-03-09T21:22:41.005704+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:41 vm07 bash[28052]: audit 2026-03-09T21:22:41.005704+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:41 vm07 bash[28052]: cluster 2026-03-09T21:22:41.011176+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e319: 8 total, 5 up, 8 in 2026-03-09T21:22:41.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:41 vm07 bash[28052]: cluster 2026-03-09T21:22:41.011176+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e319: 8 total, 5 up, 8 in 2026-03-09T21:22:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:41 vm10 bash[23387]: cluster 2026-03-09T21:22:41.003020+0000 mon.a (mon.0) 1264 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:41 vm10 bash[23387]: cluster 2026-03-09T21:22:41.003020+0000 mon.a (mon.0) 1264 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T21:22:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:41 vm10 bash[23387]: audit 2026-03-09T21:22:41.005704+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T21:22:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:41 vm10 bash[23387]: audit 2026-03-09T21:22:41.005704+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T21:22:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:41 vm10 bash[23387]: cluster 2026-03-09T21:22:41.011176+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e319: 8 total, 5 up, 8 in 2026-03-09T21:22:41.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:41 vm10 bash[23387]: cluster 2026-03-09T21:22:41.011176+0000 mon.a (mon.0) 1266 : cluster [DBG] osdmap e319: 8 total, 5 up, 8 in 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:42 vm07 bash[20771]: cluster 2026-03-09T21:22:41.672268+0000 mon.a (mon.0) 1267 : cluster [INF] osd.1 marked itself dead as of e319 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:42 vm07 bash[20771]: cluster 2026-03-09T21:22:41.672268+0000 mon.a (mon.0) 1267 : cluster [INF] osd.1 marked itself dead as of e319 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:42 vm07 bash[20771]: cluster 2026-03-09T21:22:41.793447+0000 mgr.y (mgr.24416) 252 : cluster [DBG] pgmap v436: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:42 vm07 bash[20771]: cluster 2026-03-09T21:22:41.793447+0000 mgr.y (mgr.24416) 252 : cluster [DBG] pgmap v436: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:42 vm07 bash[20771]: cluster 2026-03-09T21:22:42.012848+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e320: 8 total, 5 up, 8 in 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:42 vm07 bash[20771]: cluster 2026-03-09T21:22:42.012848+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e320: 8 total, 5 up, 8 in 2026-03-09T21:22:42.115 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:42 vm07 bash[28052]: cluster 2026-03-09T21:22:41.672268+0000 mon.a (mon.0) 1267 : cluster [INF] osd.1 marked itself dead as of e319 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:42 vm07 bash[28052]: cluster 2026-03-09T21:22:41.672268+0000 mon.a (mon.0) 1267 : cluster [INF] osd.1 marked itself dead as of e319 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:42 vm07 bash[28052]: cluster 2026-03-09T21:22:41.793447+0000 mgr.y (mgr.24416) 252 : cluster [DBG] pgmap v436: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:42 vm07 bash[28052]: cluster 2026-03-09T21:22:41.793447+0000 mgr.y (mgr.24416) 252 : cluster [DBG] pgmap v436: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:42 vm07 bash[28052]: cluster 2026-03-09T21:22:42.012848+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e320: 8 total, 5 up, 8 in 2026-03-09T21:22:42.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:42 vm07 bash[28052]: cluster 2026-03-09T21:22:42.012848+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e320: 8 total, 5 up, 8 in 2026-03-09T21:22:42.115 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:22:41 vm07 bash[36993]: debug 2026-03-09T21:22:41.817+0000 7f6fc2fb1640 -1 osd.1 319 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:42.115 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:22:42 vm07 bash[36993]: debug 2026-03-09T21:22:42.021+0000 7f6fb6b9b640 -1 osd.1 320 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T21:22:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:42 vm10 bash[23387]: cluster 2026-03-09T21:22:41.672268+0000 mon.a (mon.0) 1267 : cluster [INF] osd.1 
marked itself dead as of e319 2026-03-09T21:22:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:42 vm10 bash[23387]: cluster 2026-03-09T21:22:41.672268+0000 mon.a (mon.0) 1267 : cluster [INF] osd.1 marked itself dead as of e319 2026-03-09T21:22:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:42 vm10 bash[23387]: cluster 2026-03-09T21:22:41.793447+0000 mgr.y (mgr.24416) 252 : cluster [DBG] pgmap v436: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:42 vm10 bash[23387]: cluster 2026-03-09T21:22:41.793447+0000 mgr.y (mgr.24416) 252 : cluster [DBG] pgmap v436: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:42 vm10 bash[23387]: cluster 2026-03-09T21:22:42.012848+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e320: 8 total, 5 up, 8 in 2026-03-09T21:22:42.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:42 vm10 bash[23387]: cluster 2026-03-09T21:22:42.012848+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e320: 8 total, 5 up, 8 in 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: cluster 2026-03-09T21:22:41.671899+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: cluster 2026-03-09T21:22:41.671899+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: cluster 2026-03-09T21:22:41.671902+0000 osd.1 (osd.1) 4 : cluster [DBG] map e319 wrongly marked me down at e319 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 
bash[20771]: cluster 2026-03-09T21:22:41.671902+0000 osd.1 (osd.1) 4 : cluster [DBG] map e319 wrongly marked me down at e319 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: audit 2026-03-09T21:22:42.058424+0000 mon.a (mon.0) 1269 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: audit 2026-03-09T21:22:42.058424+0000 mon.a (mon.0) 1269 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: audit 2026-03-09T21:22:42.062023+0000 mon.c (mon.2) 119 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:43 vm07 bash[20771]: audit 2026-03-09T21:22:42.062023+0000 mon.c (mon.2) 119 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: cluster 2026-03-09T21:22:41.671899+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: cluster 2026-03-09T21:22:41.671899+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: cluster 2026-03-09T21:22:41.671902+0000 osd.1 (osd.1) 4 : cluster [DBG] map e319 wrongly marked me down at e319 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: cluster 2026-03-09T21:22:41.671902+0000 osd.1 (osd.1) 4 : cluster [DBG] map e319 wrongly marked me down at e319 
2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: audit 2026-03-09T21:22:42.058424+0000 mon.a (mon.0) 1269 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: audit 2026-03-09T21:22:42.058424+0000 mon.a (mon.0) 1269 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: audit 2026-03-09T21:22:42.062023+0000 mon.c (mon.2) 119 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:43.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:43 vm07 bash[28052]: audit 2026-03-09T21:22:42.062023+0000 mon.c (mon.2) 119 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: cluster 2026-03-09T21:22:41.671899+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: cluster 2026-03-09T21:22:41.671899+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: cluster 2026-03-09T21:22:41.671902+0000 osd.1 (osd.1) 4 : cluster [DBG] map e319 wrongly marked me down at e319 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: cluster 2026-03-09T21:22:41.671902+0000 osd.1 (osd.1) 4 : cluster [DBG] map e319 wrongly marked me down at e319 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: audit 2026-03-09T21:22:42.058424+0000 
mon.a (mon.0) 1269 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: audit 2026-03-09T21:22:42.058424+0000 mon.a (mon.0) 1269 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: audit 2026-03-09T21:22:42.062023+0000 mon.c (mon.2) 119 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:43.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:43 vm10 bash[23387]: audit 2026-03-09T21:22:42.062023+0000 mon.c (mon.2) 119 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:44.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.794167+0000 mgr.y (mgr.24416) 253 : cluster [DBG] pgmap v438: 196 pgs: 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 65 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:44.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.794167+0000 mgr.y (mgr.24416) 253 : cluster [DBG] pgmap v438: 196 pgs: 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 65 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:44.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 
2026-03-09T21:22:43.865943+0000 mon.a (mon.0) 1270 : cluster [INF] osd.2 marked itself dead as of e320 2026-03-09T21:22:44.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.865943+0000 mon.a (mon.0) 1270 : cluster [INF] osd.2 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.871636+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.871636+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.871638+0000 osd.7 (osd.7) 8 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.871638+0000 osd.7 (osd.7) 8 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.872542+0000 mon.a (mon.0) 1271 : cluster [INF] osd.7 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: cluster 2026-03-09T21:22:43.872542+0000 mon.a (mon.0) 1271 : cluster [INF] osd.7 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: audit 2026-03-09T21:22:44.013443+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:44 vm07 bash[20771]: audit 2026-03-09T21:22:44.013443+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.794167+0000 mgr.y (mgr.24416) 253 : cluster [DBG] pgmap v438: 196 pgs: 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 65 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.794167+0000 mgr.y (mgr.24416) 253 : cluster [DBG] pgmap v438: 196 pgs: 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 65 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.865943+0000 mon.a (mon.0) 1270 : cluster [INF] osd.2 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.865943+0000 mon.a (mon.0) 1270 : cluster [INF] osd.2 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.871636+0000 osd.7 (osd.7) 7 : cluster [WRN] 
Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.871636+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.871638+0000 osd.7 (osd.7) 8 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.871638+0000 osd.7 (osd.7) 8 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.872542+0000 mon.a (mon.0) 1271 : cluster [INF] osd.7 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: cluster 2026-03-09T21:22:43.872542+0000 mon.a (mon.0) 1271 : cluster [INF] osd.7 marked itself dead as of e320 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: audit 2026-03-09T21:22:44.013443+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:44.362 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:44 vm07 bash[28052]: audit 2026-03-09T21:22:44.013443+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:44.362 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:22:44 vm07 bash[36993]: debug 2026-03-09T21:22:44.085+0000 7f6fbeddb640 -1 osd.1 321 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.794167+0000 mgr.y (mgr.24416) 253 : cluster [DBG] pgmap v438: 196 pgs: 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 65 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.794167+0000 mgr.y (mgr.24416) 253 : cluster [DBG] pgmap v438: 196 pgs: 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 65 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.865943+0000 mon.a (mon.0) 1270 : cluster [INF] osd.2 marked itself dead as of e320 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.865943+0000 mon.a (mon.0) 1270 : cluster [INF] osd.2 marked itself dead as of e320 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.871636+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, 
but it is still running 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.871636+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.871638+0000 osd.7 (osd.7) 8 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.871638+0000 osd.7 (osd.7) 8 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.872542+0000 mon.a (mon.0) 1271 : cluster [INF] osd.7 marked itself dead as of e320 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: cluster 2026-03-09T21:22:43.872542+0000 mon.a (mon.0) 1271 : cluster [INF] osd.7 marked itself dead as of e320 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: audit 2026-03-09T21:22:44.013443+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:44.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:44 vm10 bash[23387]: audit 2026-03-09T21:22:44.013443+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:44.442 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:22:44 vm10 bash[44771]: debug 2026-03-09T21:22:44.145+0000 7fa1e68f2640 -1 osd.7 321 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T21:22:44.615 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:22:44 vm07 bash[42797]: debug 2026-03-09T21:22:44.357+0000 7f2904cce640 -1 osd.2 321 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:43.859991+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:43.859991+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:43.859994+0000 osd.2 (osd.2) 6 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:43.859994+0000 osd.2 (osd.2) 6 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:44.059721+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:44.059721+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: Degraded data redundancy: 
161/597 objects degraded (26.968%), 30 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:44.059921+0000 mon.a (mon.0) 1274 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:44.059921+0000 mon.a (mon.0) 1274 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: audit 2026-03-09T21:22:44.079745+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: audit 2026-03-09T21:22:44.079745+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:44.088687+0000 mon.a (mon.0) 1276 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:45 vm07 bash[20771]: cluster 2026-03-09T21:22:44.088687+0000 mon.a (mon.0) 1276 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:43.859991+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:43.859991+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:43.859994+0000 osd.2 (osd.2) 6 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:43.859994+0000 osd.2 (osd.2) 6 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:44.059721+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:44.059721+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:45.365 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:44.059921+0000 mon.a (mon.0) 1274 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:44.059921+0000 mon.a (mon.0) 1274 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: audit 2026-03-09T21:22:44.079745+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: audit 2026-03-09T21:22:44.079745+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:44.088687+0000 mon.a (mon.0) 1276 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-09T21:22:45.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:45 vm07 bash[28052]: cluster 2026-03-09T21:22:44.088687+0000 mon.a (mon.0) 1276 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:43.859991+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:43.859991+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 
2026-03-09T21:22:43.859994+0000 osd.2 (osd.2) 6 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:43.859994+0000 osd.2 (osd.2) 6 : cluster [DBG] map e320 wrongly marked me down at e319 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:44.059721+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:44.059721+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded (PG_DEGRADED) 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:44.059921+0000 mon.a (mon.0) 1274 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:44.059921+0000 mon.a (mon.0) 1274 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: audit 2026-03-09T21:22:44.079745+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: audit 2026-03-09T21:22:44.079745+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:44.088687+0000 mon.a (mon.0) 1276 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-09T21:22:45.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:45 vm10 bash[23387]: cluster 2026-03-09T21:22:44.088687+0000 mon.a (mon.0) 1276 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.080465+0000 mon.a (mon.0) 1277 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.080465+0000 mon.a (mon.0) 1277 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: audit 2026-03-09T21:22:45.095733+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: audit 2026-03-09T21:22:45.095733+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: audit 2026-03-09T21:22:45.095803+0000 mon.c (mon.2) 121 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: audit 2026-03-09T21:22:45.095803+0000 mon.c (mon.2) 121 : audit [DBG] from='mgr.24416 
192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: audit 2026-03-09T21:22:45.095835+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: audit 2026-03-09T21:22:45.095835+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.110913+0000 mon.a (mon.0) 1278 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.110913+0000 mon.a (mon.0) 1278 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.111008+0000 mon.a (mon.0) 1279 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.111008+0000 mon.a (mon.0) 1279 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.111129+0000 mon.a (mon.0) 1280 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.111129+0000 mon.a (mon.0) 1280 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:22:46.365 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.111280+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.111280+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.794543+0000 mgr.y (mgr.24416) 254 : cluster [DBG] pgmap v441: 196 pgs: 33 stale+active+clean, 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 32 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:46 vm07 bash[20771]: cluster 2026-03-09T21:22:45.794543+0000 mgr.y (mgr.24416) 254 : cluster [DBG] pgmap v441: 196 pgs: 33 stale+active+clean, 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 32 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.080465+0000 mon.a (mon.0) 1277 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.080465+0000 mon.a (mon.0) 1277 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: audit 
2026-03-09T21:22:45.095733+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: audit 2026-03-09T21:22:45.095733+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: audit 2026-03-09T21:22:45.095803+0000 mon.c (mon.2) 121 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: audit 2026-03-09T21:22:45.095803+0000 mon.c (mon.2) 121 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: audit 2026-03-09T21:22:45.095835+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: audit 2026-03-09T21:22:45.095835+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.110913+0000 mon.a (mon.0) 1278 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.110913+0000 mon.a (mon.0) 1278 : cluster [INF] osd.1 
v2:192.168.123.107:6805/4103893323 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.111008+0000 mon.a (mon.0) 1279 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.111008+0000 mon.a (mon.0) 1279 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.111129+0000 mon.a (mon.0) 1280 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:22:46.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.111129+0000 mon.a (mon.0) 1280 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:22:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.111280+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T21:22:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.111280+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T21:22:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.794543+0000 mgr.y (mgr.24416) 254 : cluster [DBG] pgmap v441: 196 pgs: 33 stale+active+clean, 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 32 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:46.366 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:46 vm07 bash[28052]: cluster 2026-03-09T21:22:45.794543+0000 mgr.y 
(mgr.24416) 254 : cluster [DBG] pgmap v441: 196 pgs: 33 stale+active+clean, 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 32 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.080465+0000 mon.a (mon.0) 1277 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.080465+0000 mon.a (mon.0) 1277 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: audit 2026-03-09T21:22:45.095733+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: audit 2026-03-09T21:22:45.095733+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: audit 2026-03-09T21:22:45.095803+0000 mon.c (mon.2) 121 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: audit 2026-03-09T21:22:45.095803+0000 mon.c (mon.2) 121 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T21:22:46.442 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: audit 2026-03-09T21:22:45.095835+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: audit 2026-03-09T21:22:45.095835+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.110913+0000 mon.a (mon.0) 1278 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.110913+0000 mon.a (mon.0) 1278 : cluster [INF] osd.1 v2:192.168.123.107:6805/4103893323 boot 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.111008+0000 mon.a (mon.0) 1279 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.111008+0000 mon.a (mon.0) 1279 : cluster [INF] osd.7 v2:192.168.123.110:6812/2049527874 boot 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.111129+0000 mon.a (mon.0) 1280 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.111129+0000 mon.a (mon.0) 1280 : cluster [INF] osd.2 v2:192.168.123.107:6809/2553486713 boot 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.111280+0000 mon.a (mon.0) 1281 
: cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.111280+0000 mon.a (mon.0) 1281 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.794543+0000 mgr.y (mgr.24416) 254 : cluster [DBG] pgmap v441: 196 pgs: 33 stale+active+clean, 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 32 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:46.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:46 vm10 bash[23387]: cluster 2026-03-09T21:22:45.794543+0000 mgr.y (mgr.24416) 254 : cluster [DBG] pgmap v441: 196 pgs: 33 stale+active+clean, 8 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 35 active+undersized, 1 unknown, 9 active+undersized+degraded, 28 undersized+peered+wait, 37 active+undersized+wait, 32 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 213 B/s rd, 0 op/s; 161/597 objects degraded (26.968%) 2026-03-09T21:22:46.942 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:22:46 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:47 vm07 bash[20771]: cluster 2026-03-09T21:22:46.123840+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:47 vm07 bash[20771]: cluster 2026-03-09T21:22:46.123840+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:47 vm07 bash[20771]: audit 
2026-03-09T21:22:46.462599+0000 mgr.y (mgr.24416) 255 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:47 vm07 bash[20771]: audit 2026-03-09T21:22:46.462599+0000 mgr.y (mgr.24416) 255 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:47 vm07 bash[28052]: cluster 2026-03-09T21:22:46.123840+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:47 vm07 bash[28052]: cluster 2026-03-09T21:22:46.123840+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:47 vm07 bash[28052]: audit 2026-03-09T21:22:46.462599+0000 mgr.y (mgr.24416) 255 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:47.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:47 vm07 bash[28052]: audit 2026-03-09T21:22:46.462599+0000 mgr.y (mgr.24416) 255 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:47 vm10 bash[23387]: cluster 2026-03-09T21:22:46.123840+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T21:22:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:47 vm10 bash[23387]: cluster 2026-03-09T21:22:46.123840+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in 2026-03-09T21:22:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:47 vm10 bash[23387]: audit 2026-03-09T21:22:46.462599+0000 
mgr.y (mgr.24416) 255 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:47.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:47 vm10 bash[23387]: audit 2026-03-09T21:22:46.462599+0000 mgr.y (mgr.24416) 255 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:48 vm07 bash[20771]: cluster 2026-03-09T21:22:47.795342+0000 mgr.y (mgr.24416) 256 : cluster [DBG] pgmap v443: 196 pgs: 196 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 708 B/s rd, 0 op/s 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:48 vm07 bash[20771]: cluster 2026-03-09T21:22:47.795342+0000 mgr.y (mgr.24416) 256 : cluster [DBG] pgmap v443: 196 pgs: 196 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 708 B/s rd, 0 op/s 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:48 vm07 bash[20771]: audit 2026-03-09T21:22:47.993996+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:48 vm07 bash[20771]: audit 2026-03-09T21:22:47.993996+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:48 vm07 bash[28052]: cluster 2026-03-09T21:22:47.795342+0000 mgr.y (mgr.24416) 256 : cluster [DBG] pgmap v443: 196 pgs: 196 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 708 B/s rd, 0 op/s 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:48 vm07 bash[28052]: cluster 2026-03-09T21:22:47.795342+0000 mgr.y (mgr.24416) 256 : cluster [DBG] pgmap v443: 196 pgs: 196 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 708 B/s rd, 0 op/s 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:48 vm07 bash[28052]: audit 2026-03-09T21:22:47.993996+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:48.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:48 vm07 bash[28052]: audit 2026-03-09T21:22:47.993996+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:48.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:48 vm10 bash[23387]: cluster 2026-03-09T21:22:47.795342+0000 mgr.y (mgr.24416) 256 : cluster [DBG] pgmap v443: 196 pgs: 196 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 708 B/s rd, 0 op/s 2026-03-09T21:22:48.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:48 vm10 bash[23387]: cluster 2026-03-09T21:22:47.795342+0000 mgr.y (mgr.24416) 256 : cluster [DBG] pgmap v443: 196 pgs: 196 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 708 B/s rd, 0 op/s 2026-03-09T21:22:48.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:48 vm10 bash[23387]: audit 2026-03-09T21:22:47.993996+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:48.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:48 vm10 bash[23387]: audit 2026-03-09T21:22:47.993996+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:49.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:22:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:22:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:22:49.221 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb_error PASSED [ 79%] 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: cluster 2026-03-09T21:22:48.185696+0000 mon.a (mon.0) 1284 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: cluster 2026-03-09T21:22:48.185696+0000 mon.a (mon.0) 1284 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: audit 2026-03-09T21:22:48.197283+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: audit 2026-03-09T21:22:48.197283+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: cluster 2026-03-09T21:22:48.209726+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: cluster 2026-03-09T21:22:48.209726+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: cluster 2026-03-09T21:22:49.208658+0000 mon.a (mon.0) 1287 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:49 vm07 bash[20771]: cluster 2026-03-09T21:22:49.208658+0000 mon.a (mon.0) 1287 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: cluster 2026-03-09T21:22:48.185696+0000 mon.a (mon.0) 1284 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: cluster 2026-03-09T21:22:48.185696+0000 mon.a (mon.0) 1284 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: audit 2026-03-09T21:22:48.197283+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: audit 2026-03-09T21:22:48.197283+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: cluster 2026-03-09T21:22:48.209726+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: cluster 2026-03-09T21:22:48.209726+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: cluster 2026-03-09T21:22:49.208658+0000 mon.a (mon.0) 1287 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T21:22:49.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:49 vm07 bash[28052]: cluster 2026-03-09T21:22:49.208658+0000 mon.a (mon.0) 1287 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: cluster 2026-03-09T21:22:48.185696+0000 mon.a (mon.0) 1284 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: cluster 2026-03-09T21:22:48.185696+0000 mon.a (mon.0) 1284 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 161/597 objects degraded (26.968%), 30 pgs degraded) 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: audit 2026-03-09T21:22:48.197283+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: audit 2026-03-09T21:22:48.197283+0000 mon.a (mon.0) 1285 : audit [INF] from='client.? 
192.168.123.107:0/3084302067' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: cluster 2026-03-09T21:22:48.209726+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: cluster 2026-03-09T21:22:48.209726+0000 mon.a (mon.0) 1286 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: cluster 2026-03-09T21:22:49.208658+0000 mon.a (mon.0) 1287 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T21:22:49.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:49 vm10 bash[23387]: cluster 2026-03-09T21:22:49.208658+0000 mon.a (mon.0) 1287 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-09T21:22:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:50 vm10 bash[23387]: cluster 2026-03-09T21:22:49.795723+0000 mgr.y (mgr.24416) 257 : cluster [DBG] pgmap v446: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 652 B/s rd, 0 op/s 2026-03-09T21:22:50.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:50 vm10 bash[23387]: cluster 2026-03-09T21:22:49.795723+0000 mgr.y (mgr.24416) 257 : cluster [DBG] pgmap v446: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 652 B/s rd, 0 op/s 2026-03-09T21:22:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:50 vm07 bash[20771]: cluster 2026-03-09T21:22:49.795723+0000 mgr.y (mgr.24416) 257 : cluster [DBG] pgmap v446: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 652 B/s rd, 0 op/s 2026-03-09T21:22:50.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:50 vm07 bash[20771]: cluster 2026-03-09T21:22:49.795723+0000 mgr.y (mgr.24416) 257 : cluster [DBG] pgmap v446: 164 pgs: 164 
active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 652 B/s rd, 0 op/s 2026-03-09T21:22:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:50 vm07 bash[28052]: cluster 2026-03-09T21:22:49.795723+0000 mgr.y (mgr.24416) 257 : cluster [DBG] pgmap v446: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 652 B/s rd, 0 op/s 2026-03-09T21:22:50.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:50 vm07 bash[28052]: cluster 2026-03-09T21:22:49.795723+0000 mgr.y (mgr.24416) 257 : cluster [DBG] pgmap v446: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 652 B/s rd, 0 op/s 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:51 vm07 bash[20771]: cluster 2026-03-09T21:22:50.225449+0000 mon.a (mon.0) 1288 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:51 vm07 bash[20771]: cluster 2026-03-09T21:22:50.225449+0000 mon.a (mon.0) 1288 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:51 vm07 bash[20771]: cluster 2026-03-09T21:22:50.425942+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:51 vm07 bash[20771]: cluster 2026-03-09T21:22:50.425942+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:51 vm07 bash[28052]: cluster 2026-03-09T21:22:50.225449+0000 mon.a (mon.0) 1288 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:51 vm07 bash[28052]: cluster 2026-03-09T21:22:50.225449+0000 mon.a 
(mon.0) 1288 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:51 vm07 bash[28052]: cluster 2026-03-09T21:22:50.425942+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T21:22:51.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:51 vm07 bash[28052]: cluster 2026-03-09T21:22:50.425942+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T21:22:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:51 vm10 bash[23387]: cluster 2026-03-09T21:22:50.225449+0000 mon.a (mon.0) 1288 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:51 vm10 bash[23387]: cluster 2026-03-09T21:22:50.225449+0000 mon.a (mon.0) 1288 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:51 vm10 bash[23387]: cluster 2026-03-09T21:22:50.425942+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T21:22:51.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:51 vm10 bash[23387]: cluster 2026-03-09T21:22:50.425942+0000 mon.a (mon.0) 1289 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-09T21:22:52.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: cluster 2026-03-09T21:22:51.421407+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T21:22:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: cluster 2026-03-09T21:22:51.421407+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T21:22:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: audit 
2026-03-09T21:22:51.486126+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.107:0/694688727' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: audit 2026-03-09T21:22:51.486126+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.107:0/694688727' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: audit 2026-03-09T21:22:51.486537+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: audit 2026-03-09T21:22:51.486537+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: cluster 2026-03-09T21:22:51.796004+0000 mgr.y (mgr.24416) 258 : cluster [DBG] pgmap v449: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: cluster 2026-03-09T21:22:51.796004+0000 mgr.y (mgr.24416) 258 : cluster [DBG] pgmap v449: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: audit 2026-03-09T21:22:52.421672+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: audit 2026-03-09T21:22:52.421672+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: cluster 2026-03-09T21:22:52.425820+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:52 vm07 bash[20771]: cluster 2026-03-09T21:22:52.425820+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: cluster 2026-03-09T21:22:51.421407+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: cluster 2026-03-09T21:22:51.421407+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: audit 2026-03-09T21:22:51.486126+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.107:0/694688727' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: audit 2026-03-09T21:22:51.486126+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.107:0/694688727' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: audit 2026-03-09T21:22:51.486537+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: audit 2026-03-09T21:22:51.486537+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: cluster 2026-03-09T21:22:51.796004+0000 mgr.y (mgr.24416) 258 : cluster [DBG] pgmap v449: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: cluster 2026-03-09T21:22:51.796004+0000 mgr.y (mgr.24416) 258 : cluster [DBG] pgmap v449: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: audit 2026-03-09T21:22:52.421672+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: audit 2026-03-09T21:22:52.421672+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: cluster 2026-03-09T21:22:52.425820+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T21:22:52.919 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:52 vm07 bash[28052]: cluster 2026-03-09T21:22:52.425820+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T21:22:52.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: cluster 2026-03-09T21:22:51.421407+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: cluster 2026-03-09T21:22:51.421407+0000 mon.a (mon.0) 1290 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: audit 2026-03-09T21:22:51.486126+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.107:0/694688727' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: audit 2026-03-09T21:22:51.486126+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.107:0/694688727' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: audit 2026-03-09T21:22:51.486537+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: audit 2026-03-09T21:22:51.486537+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: cluster 2026-03-09T21:22:51.796004+0000 mgr.y (mgr.24416) 258 : cluster [DBG] pgmap v449: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: cluster 2026-03-09T21:22:51.796004+0000 mgr.y (mgr.24416) 258 : cluster [DBG] pgmap v449: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: audit 2026-03-09T21:22:52.421672+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: audit 2026-03-09T21:22:52.421672+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: cluster 2026-03-09T21:22:52.425820+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T21:22:52.960 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:52 vm10 bash[23387]: cluster 2026-03-09T21:22:52.425820+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-09T21:22:53.442 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lock PASSED [ 80%] 2026-03-09T21:22:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:54 vm10 bash[23387]: cluster 2026-03-09T21:22:53.431959+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T21:22:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:54 vm10 bash[23387]: cluster 2026-03-09T21:22:53.431959+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T21:22:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:54 vm10 bash[23387]: cluster 2026-03-09T21:22:53.796407+0000 mgr.y (mgr.24416) 259 : cluster [DBG] pgmap v452: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:54.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:54 vm10 bash[23387]: cluster 2026-03-09T21:22:53.796407+0000 mgr.y (mgr.24416) 259 : cluster [DBG] pgmap v452: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:54 vm07 bash[20771]: cluster 2026-03-09T21:22:53.431959+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:54 vm07 bash[20771]: cluster 2026-03-09T21:22:53.431959+0000 mon.a (mon.0) 1294 : 
cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:54 vm07 bash[20771]: cluster 2026-03-09T21:22:53.796407+0000 mgr.y (mgr.24416) 259 : cluster [DBG] pgmap v452: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:54 vm07 bash[20771]: cluster 2026-03-09T21:22:53.796407+0000 mgr.y (mgr.24416) 259 : cluster [DBG] pgmap v452: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:54 vm07 bash[28052]: cluster 2026-03-09T21:22:53.431959+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:54 vm07 bash[28052]: cluster 2026-03-09T21:22:53.431959+0000 mon.a (mon.0) 1294 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:54 vm07 bash[28052]: cluster 2026-03-09T21:22:53.796407+0000 mgr.y (mgr.24416) 259 : cluster [DBG] pgmap v452: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:54.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:54 vm07 bash[28052]: cluster 2026-03-09T21:22:53.796407+0000 mgr.y (mgr.24416) 259 : cluster [DBG] pgmap v452: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:55.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:55 vm07 bash[20771]: cluster 2026-03-09T21:22:54.462883+0000 mon.a (mon.0) 1295 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T21:22:55.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:55 vm07 bash[20771]: cluster 2026-03-09T21:22:54.462883+0000 mon.a (mon.0) 1295 : cluster [DBG] osdmap e330: 
8 total, 8 up, 8 in 2026-03-09T21:22:55.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:55 vm07 bash[28052]: cluster 2026-03-09T21:22:54.462883+0000 mon.a (mon.0) 1295 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T21:22:55.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:55 vm07 bash[28052]: cluster 2026-03-09T21:22:54.462883+0000 mon.a (mon.0) 1295 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T21:22:55.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:55 vm10 bash[23387]: cluster 2026-03-09T21:22:54.462883+0000 mon.a (mon.0) 1295 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T21:22:55.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:55 vm10 bash[23387]: cluster 2026-03-09T21:22:54.462883+0000 mon.a (mon.0) 1295 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: cluster 2026-03-09T21:22:55.460816+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: cluster 2026-03-09T21:22:55.460816+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: audit 2026-03-09T21:22:55.725623+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.107:0/3987201693' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: audit 2026-03-09T21:22:55.725623+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 
192.168.123.107:0/3987201693' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: audit 2026-03-09T21:22:55.726106+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: audit 2026-03-09T21:22:55.726106+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: cluster 2026-03-09T21:22:55.796775+0000 mgr.y (mgr.24416) 260 : cluster [DBG] pgmap v455: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: cluster 2026-03-09T21:22:55.796775+0000 mgr.y (mgr.24416) 260 : cluster [DBG] pgmap v455: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: audit 2026-03-09T21:22:56.447819+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: audit 2026-03-09T21:22:56.447819+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: cluster 2026-03-09T21:22:56.455551+0000 mon.a (mon.0) 1299 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:56 vm07 bash[20771]: cluster 2026-03-09T21:22:56.455551+0000 mon.a (mon.0) 1299 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: cluster 2026-03-09T21:22:55.460816+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: cluster 2026-03-09T21:22:55.460816+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: audit 2026-03-09T21:22:55.725623+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.107:0/3987201693' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: audit 2026-03-09T21:22:55.725623+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.107:0/3987201693' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: audit 2026-03-09T21:22:55.726106+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: audit 2026-03-09T21:22:55.726106+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: cluster 2026-03-09T21:22:55.796775+0000 mgr.y (mgr.24416) 260 : cluster [DBG] pgmap v455: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: cluster 2026-03-09T21:22:55.796775+0000 mgr.y (mgr.24416) 260 : cluster [DBG] pgmap v455: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: audit 2026-03-09T21:22:56.447819+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: audit 2026-03-09T21:22:56.447819+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: cluster 2026-03-09T21:22:56.455551+0000 mon.a (mon.0) 1299 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T21:22:56.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:56 vm07 bash[28052]: cluster 2026-03-09T21:22:56.455551+0000 mon.a (mon.0) 1299 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: cluster 2026-03-09T21:22:55.460816+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: cluster 2026-03-09T21:22:55.460816+0000 mon.a (mon.0) 1296 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: audit 2026-03-09T21:22:55.725623+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.107:0/3987201693' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: audit 2026-03-09T21:22:55.725623+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.107:0/3987201693' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: audit 2026-03-09T21:22:55.726106+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: audit 2026-03-09T21:22:55.726106+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: cluster 2026-03-09T21:22:55.796775+0000 mgr.y (mgr.24416) 260 : cluster [DBG] pgmap v455: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: cluster 2026-03-09T21:22:55.796775+0000 mgr.y (mgr.24416) 260 : cluster [DBG] pgmap v455: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: audit 2026-03-09T21:22:56.447819+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: audit 2026-03-09T21:22:56.447819+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: cluster 2026-03-09T21:22:56.455551+0000 mon.a (mon.0) 1299 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T21:22:56.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:56 vm10 bash[23387]: cluster 2026-03-09T21:22:56.455551+0000 mon.a (mon.0) 1299 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-09T21:22:56.942 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:22:56 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:22:57.459 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute PASSED [ 81%] 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: audit 2026-03-09T21:22:56.473396+0000 mgr.y (mgr.24416) 261 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: audit 2026-03-09T21:22:56.473396+0000 mgr.y (mgr.24416) 261 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: cluster 2026-03-09T21:22:56.485163+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: cluster 2026-03-09T21:22:56.485163+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: audit 
2026-03-09T21:22:57.119605+0000 mon.a (mon.0) 1301 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: audit 2026-03-09T21:22:57.119605+0000 mon.a (mon.0) 1301 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: audit 2026-03-09T21:22:57.120275+0000 mon.c (mon.2) 125 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: audit 2026-03-09T21:22:57.120275+0000 mon.c (mon.2) 125 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: cluster 2026-03-09T21:22:57.453902+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:57 vm07 bash[20771]: cluster 2026-03-09T21:22:57.453902+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: audit 2026-03-09T21:22:56.473396+0000 mgr.y (mgr.24416) 261 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: audit 2026-03-09T21:22:56.473396+0000 mgr.y (mgr.24416) 261 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: cluster 
2026-03-09T21:22:56.485163+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: cluster 2026-03-09T21:22:56.485163+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: audit 2026-03-09T21:22:57.119605+0000 mon.a (mon.0) 1301 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: audit 2026-03-09T21:22:57.119605+0000 mon.a (mon.0) 1301 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: audit 2026-03-09T21:22:57.120275+0000 mon.c (mon.2) 125 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: audit 2026-03-09T21:22:57.120275+0000 mon.c (mon.2) 125 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: cluster 2026-03-09T21:22:57.453902+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T21:22:57.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:57 vm07 bash[28052]: cluster 2026-03-09T21:22:57.453902+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: audit 2026-03-09T21:22:56.473396+0000 mgr.y (mgr.24416) 261 : audit [DBG] 
from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: audit 2026-03-09T21:22:56.473396+0000 mgr.y (mgr.24416) 261 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: cluster 2026-03-09T21:22:56.485163+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: cluster 2026-03-09T21:22:56.485163+0000 mon.a (mon.0) 1300 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: audit 2026-03-09T21:22:57.119605+0000 mon.a (mon.0) 1301 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: audit 2026-03-09T21:22:57.119605+0000 mon.a (mon.0) 1301 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: audit 2026-03-09T21:22:57.120275+0000 mon.c (mon.2) 125 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: audit 2026-03-09T21:22:57.120275+0000 mon.c (mon.2) 125 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 
bash[23387]: cluster 2026-03-09T21:22:57.453902+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T21:22:57.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:57 vm10 bash[23387]: cluster 2026-03-09T21:22:57.453902+0000 mon.a (mon.0) 1302 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:58 vm07 bash[20771]: cluster 2026-03-09T21:22:57.797182+0000 mgr.y (mgr.24416) 262 : cluster [DBG] pgmap v458: 164 pgs: 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:58 vm07 bash[20771]: cluster 2026-03-09T21:22:57.797182+0000 mgr.y (mgr.24416) 262 : cluster [DBG] pgmap v458: 164 pgs: 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:58 vm07 bash[20771]: cluster 2026-03-09T21:22:58.470816+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:22:58 vm07 bash[20771]: cluster 2026-03-09T21:22:58.470816+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:22:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:22:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:58 vm07 bash[28052]: cluster 2026-03-09T21:22:57.797182+0000 mgr.y (mgr.24416) 262 : cluster [DBG] pgmap v458: 164 pgs: 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:58 vm07 bash[28052]: cluster 2026-03-09T21:22:57.797182+0000 mgr.y (mgr.24416) 262 : cluster [DBG] pgmap v458: 164 pgs: 
164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:58 vm07 bash[28052]: cluster 2026-03-09T21:22:58.470816+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T21:22:58.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:22:58 vm07 bash[28052]: cluster 2026-03-09T21:22:58.470816+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T21:22:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:58 vm10 bash[23387]: cluster 2026-03-09T21:22:57.797182+0000 mgr.y (mgr.24416) 262 : cluster [DBG] pgmap v458: 164 pgs: 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:58 vm10 bash[23387]: cluster 2026-03-09T21:22:57.797182+0000 mgr.y (mgr.24416) 262 : cluster [DBG] pgmap v458: 164 pgs: 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:22:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:58 vm10 bash[23387]: cluster 2026-03-09T21:22:58.470816+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T21:22:58.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:22:58 vm10 bash[23387]: cluster 2026-03-09T21:22:58.470816+0000 mon.a (mon.0) 1303 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: cluster 2026-03-09T21:22:59.470421+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: cluster 2026-03-09T21:22:59.470421+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: 
audit 2026-03-09T21:22:59.523728+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.107:0/2450013632' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: audit 2026-03-09T21:22:59.523728+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.107:0/2450013632' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: audit 2026-03-09T21:22:59.524110+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: audit 2026-03-09T21:22:59.524110+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: cluster 2026-03-09T21:22:59.797457+0000 mgr.y (mgr.24416) 263 : cluster [DBG] pgmap v461: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:00 vm07 bash[20771]: cluster 2026-03-09T21:22:59.797457+0000 mgr.y (mgr.24416) 263 : cluster [DBG] pgmap v461: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: cluster 2026-03-09T21:22:59.470421+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: cluster 2026-03-09T21:22:59.470421+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e335: 8 
total, 8 up, 8 in 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: audit 2026-03-09T21:22:59.523728+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.107:0/2450013632' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: audit 2026-03-09T21:22:59.523728+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.107:0/2450013632' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: audit 2026-03-09T21:22:59.524110+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: audit 2026-03-09T21:22:59.524110+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: cluster 2026-03-09T21:22:59.797457+0000 mgr.y (mgr.24416) 263 : cluster [DBG] pgmap v461: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:00.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:00 vm07 bash[28052]: cluster 2026-03-09T21:22:59.797457+0000 mgr.y (mgr.24416) 263 : cluster [DBG] pgmap v461: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: cluster 2026-03-09T21:22:59.470421+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: cluster 2026-03-09T21:22:59.470421+0000 mon.a (mon.0) 1304 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: audit 2026-03-09T21:22:59.523728+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.107:0/2450013632' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: audit 2026-03-09T21:22:59.523728+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.107:0/2450013632' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: audit 2026-03-09T21:22:59.524110+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: audit 2026-03-09T21:22:59.524110+0000 mon.a (mon.0) 1305 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: cluster 2026-03-09T21:22:59.797457+0000 mgr.y (mgr.24416) 263 : cluster [DBG] pgmap v461: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:00.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:00 vm10 bash[23387]: cluster 2026-03-09T21:22:59.797457+0000 mgr.y (mgr.24416) 263 : cluster [DBG] pgmap v461: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:01.877 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_execute PASSED [ 82%] 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:01 vm07 bash[20771]: audit 2026-03-09T21:23:00.543734+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:01 vm07 bash[20771]: audit 2026-03-09T21:23:00.543734+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:01 vm07 bash[20771]: cluster 2026-03-09T21:23:00.555045+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:01 vm07 bash[20771]: cluster 2026-03-09T21:23:00.555045+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:01 vm07 bash[28052]: audit 2026-03-09T21:23:00.543734+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:01 vm07 bash[28052]: audit 2026-03-09T21:23:00.543734+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:01 vm07 bash[28052]: cluster 2026-03-09T21:23:00.555045+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T21:23:02.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:01 vm07 bash[28052]: cluster 2026-03-09T21:23:00.555045+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T21:23:02.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:01 vm10 bash[23387]: audit 2026-03-09T21:23:00.543734+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:02.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:01 vm10 bash[23387]: audit 2026-03-09T21:23:00.543734+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:02.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:01 vm10 bash[23387]: cluster 2026-03-09T21:23:00.555045+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T21:23:02.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:01 vm10 bash[23387]: cluster 2026-03-09T21:23:00.555045+0000 mon.a (mon.0) 1307 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:02 vm07 bash[20771]: cluster 2026-03-09T21:23:01.797728+0000 mgr.y (mgr.24416) 264 : cluster [DBG] pgmap v463: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:02 vm07 bash[20771]: cluster 2026-03-09T21:23:01.797728+0000 mgr.y (mgr.24416) 264 : cluster [DBG] pgmap v463: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:02 vm07 bash[20771]: cluster 2026-03-09T21:23:01.864763+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:02 vm07 bash[20771]: cluster 2026-03-09T21:23:01.864763+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:02 vm07 bash[28052]: cluster 2026-03-09T21:23:01.797728+0000 mgr.y (mgr.24416) 264 : cluster [DBG] pgmap v463: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:02 vm07 bash[28052]: cluster 2026-03-09T21:23:01.797728+0000 mgr.y (mgr.24416) 264 : cluster [DBG] pgmap v463: 196 pgs: 32 
unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:02 vm07 bash[28052]: cluster 2026-03-09T21:23:01.864763+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T21:23:03.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:02 vm07 bash[28052]: cluster 2026-03-09T21:23:01.864763+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T21:23:03.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:02 vm10 bash[23387]: cluster 2026-03-09T21:23:01.797728+0000 mgr.y (mgr.24416) 264 : cluster [DBG] pgmap v463: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:03.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:02 vm10 bash[23387]: cluster 2026-03-09T21:23:01.797728+0000 mgr.y (mgr.24416) 264 : cluster [DBG] pgmap v463: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 469 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:03.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:02 vm10 bash[23387]: cluster 2026-03-09T21:23:01.864763+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T21:23:03.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:02 vm10 bash[23387]: cluster 2026-03-09T21:23:01.864763+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-09T21:23:04.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:03 vm10 bash[23387]: cluster 2026-03-09T21:23:02.856043+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T21:23:04.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:03 vm10 bash[23387]: cluster 2026-03-09T21:23:02.856043+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T21:23:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 
09 21:23:03 vm07 bash[20771]: cluster 2026-03-09T21:23:02.856043+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T21:23:04.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:03 vm07 bash[20771]: cluster 2026-03-09T21:23:02.856043+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T21:23:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:03 vm07 bash[28052]: cluster 2026-03-09T21:23:02.856043+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T21:23:04.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:03 vm07 bash[28052]: cluster 2026-03-09T21:23:02.856043+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-09T21:23:05.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: cluster 2026-03-09T21:23:03.798035+0000 mgr.y (mgr.24416) 265 : cluster [DBG] pgmap v466: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: cluster 2026-03-09T21:23:03.798035+0000 mgr.y (mgr.24416) 265 : cluster [DBG] pgmap v466: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: cluster 2026-03-09T21:23:03.860220+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: cluster 2026-03-09T21:23:03.860220+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: audit 2026-03-09T21:23:03.889613+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 
192.168.123.107:0/622814874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: audit 2026-03-09T21:23:03.889613+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.107:0/622814874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: audit 2026-03-09T21:23:03.895830+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:05 vm07 bash[20771]: audit 2026-03-09T21:23:03.895830+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: cluster 2026-03-09T21:23:03.798035+0000 mgr.y (mgr.24416) 265 : cluster [DBG] pgmap v466: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: cluster 2026-03-09T21:23:03.798035+0000 mgr.y (mgr.24416) 265 : cluster [DBG] pgmap v466: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: cluster 2026-03-09T21:23:03.860220+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: cluster 2026-03-09T21:23:03.860220+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
21:23:05 vm07 bash[28052]: audit 2026-03-09T21:23:03.889613+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.107:0/622814874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: audit 2026-03-09T21:23:03.889613+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.107:0/622814874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: audit 2026-03-09T21:23:03.895830+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.373 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:05 vm07 bash[28052]: audit 2026-03-09T21:23:03.895830+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: cluster 2026-03-09T21:23:03.798035+0000 mgr.y (mgr.24416) 265 : cluster [DBG] pgmap v466: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: cluster 2026-03-09T21:23:03.798035+0000 mgr.y (mgr.24416) 265 : cluster [DBG] pgmap v466: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: cluster 2026-03-09T21:23:03.860220+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: cluster 2026-03-09T21:23:03.860220+0000 mon.a (mon.0) 1310 : cluster 
[DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: audit 2026-03-09T21:23:03.889613+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.107:0/622814874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: audit 2026-03-09T21:23:03.889613+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.107:0/622814874' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: audit 2026-03-09T21:23:03.895830+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:05.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:05 vm10 bash[23387]: audit 2026-03-09T21:23:03.895830+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:06.052 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_setxattr PASSED [ 83%] 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: audit 2026-03-09T21:23:05.030630+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: audit 2026-03-09T21:23:05.030630+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: cluster 2026-03-09T21:23:05.034625+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: cluster 2026-03-09T21:23:05.034625+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: cluster 2026-03-09T21:23:05.798319+0000 mgr.y (mgr.24416) 266 : cluster [DBG] pgmap v469: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: cluster 2026-03-09T21:23:05.798319+0000 mgr.y (mgr.24416) 266 : cluster [DBG] pgmap v469: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: cluster 2026-03-09T21:23:06.043310+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T21:23:06.484 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:06 vm10 bash[23387]: cluster 2026-03-09T21:23:06.043310+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: audit 2026-03-09T21:23:05.030630+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: audit 2026-03-09T21:23:05.030630+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: cluster 2026-03-09T21:23:05.034625+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: cluster 2026-03-09T21:23:05.034625+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: cluster 2026-03-09T21:23:05.798319+0000 mgr.y (mgr.24416) 266 : cluster [DBG] pgmap v469: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: cluster 2026-03-09T21:23:05.798319+0000 mgr.y (mgr.24416) 266 : cluster [DBG] pgmap v469: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: cluster 2026-03-09T21:23:06.043310+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:06 vm07 bash[20771]: cluster 2026-03-09T21:23:06.043310+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: audit 2026-03-09T21:23:05.030630+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: audit 2026-03-09T21:23:05.030630+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: cluster 2026-03-09T21:23:05.034625+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: cluster 2026-03-09T21:23:05.034625+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: cluster 2026-03-09T21:23:05.798319+0000 mgr.y (mgr.24416) 266 : cluster [DBG] pgmap v469: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: cluster 2026-03-09T21:23:05.798319+0000 mgr.y (mgr.24416) 266 : cluster [DBG] pgmap v469: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: cluster 2026-03-09T21:23:06.043310+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T21:23:06.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:06 vm07 bash[28052]: cluster 2026-03-09T21:23:06.043310+0000 mon.a (mon.0) 1314 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-09T21:23:06.942 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:23:06 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:23:07.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:07 vm07 bash[20771]: audit 2026-03-09T21:23:06.484201+0000 mgr.y (mgr.24416) 267 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:07.615 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:07 vm07 bash[20771]: audit 2026-03-09T21:23:06.484201+0000 mgr.y (mgr.24416) 267 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:07.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:07 vm07 bash[28052]: audit 2026-03-09T21:23:06.484201+0000 mgr.y (mgr.24416) 267 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:07.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:07 vm07 bash[28052]: audit 2026-03-09T21:23:06.484201+0000 mgr.y (mgr.24416) 267 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:07 vm10 bash[23387]: audit 2026-03-09T21:23:06.484201+0000 mgr.y (mgr.24416) 267 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:07.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:07 vm10 bash[23387]: audit 2026-03-09T21:23:06.484201+0000 mgr.y (mgr.24416) 267 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: cluster 2026-03-09T21:23:07.187877+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: cluster 2026-03-09T21:23:07.187877+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:07.198384+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:07.198384+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:07.200535+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:07.200535+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: cluster 2026-03-09T21:23:07.799130+0000 mgr.y (mgr.24416) 268 : cluster [DBG] pgmap v472: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: cluster 2026-03-09T21:23:07.799130+0000 mgr.y (mgr.24416) 268 : cluster [DBG] pgmap v472: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:08.154867+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:08.154867+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: cluster 2026-03-09T21:23:08.159495+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: cluster 2026-03-09T21:23:08.159495+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:08.197057+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:08.197057+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:08.197682+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:08 vm07 bash[20771]: audit 2026-03-09T21:23:08.197682+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: cluster 2026-03-09T21:23:07.187877+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: cluster 2026-03-09T21:23:07.187877+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:07.198384+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:07.198384+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:07.200535+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:07.200535+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: cluster 2026-03-09T21:23:07.799130+0000 mgr.y (mgr.24416) 268 : cluster [DBG] pgmap v472: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: cluster 2026-03-09T21:23:07.799130+0000 mgr.y (mgr.24416) 268 : cluster [DBG] pgmap v472: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:08.154867+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T21:23:08.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:08.154867+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T21:23:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: cluster 2026-03-09T21:23:08.159495+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T21:23:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: cluster 2026-03-09T21:23:08.159495+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T21:23:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:08.197057+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T21:23:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:08.197057+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T21:23:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:08.197682+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:23:08.616 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:08 vm07 bash[28052]: audit 2026-03-09T21:23:08.197682+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: cluster 2026-03-09T21:23:07.187877+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: cluster 2026-03-09T21:23:07.187877+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:07.198384+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:07.198384+0000 mon.a (mon.0) 1316 : audit [DBG] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:07.200535+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:07.200535+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: cluster 2026-03-09T21:23:07.799130+0000 mgr.y (mgr.24416) 268 : cluster [DBG] pgmap v472: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: cluster 2026-03-09T21:23:07.799130+0000 mgr.y (mgr.24416) 268 : cluster [DBG] pgmap v472: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:08.154867+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:08.154867+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: cluster 2026-03-09T21:23:08.159495+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: cluster 2026-03-09T21:23:08.159495+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:08.197057+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:08.197057+0000 mon.a (mon.0) 1320 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:08.197682+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:23:08.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:08 vm10 bash[23387]: audit 2026-03-09T21:23:08.197682+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T21:23:09.114 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:23:08 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:23:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: audit 2026-03-09T21:23:09.158048+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: audit 2026-03-09T21:23:09.158048+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: cluster 2026-03-09T21:23:09.167775+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: cluster 2026-03-09T21:23:09.167775+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: audit 2026-03-09T21:23:09.169751+0000 mon.a (mon.0) 1324 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: audit 2026-03-09T21:23:09.169751+0000 mon.a (mon.0) 1324 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: audit 2026-03-09T21:23:09.169959+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: audit 2026-03-09T21:23:09.169959+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: cluster 2026-03-09T21:23:09.799434+0000 mgr.y (mgr.24416) 269 : cluster [DBG] pgmap v475: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:10.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:10 vm10 bash[23387]: cluster 2026-03-09T21:23:09.799434+0000 mgr.y (mgr.24416) 269 : cluster [DBG] pgmap v475: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: audit 2026-03-09T21:23:09.158048+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: audit 2026-03-09T21:23:09.158048+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: cluster 2026-03-09T21:23:09.167775+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: cluster 2026-03-09T21:23:09.167775+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: audit 2026-03-09T21:23:09.169751+0000 mon.a (mon.0) 1324 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: audit 2026-03-09T21:23:09.169751+0000 mon.a (mon.0) 1324 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: audit 2026-03-09T21:23:09.169959+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: audit 2026-03-09T21:23:09.169959+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: cluster 2026-03-09T21:23:09.799434+0000 mgr.y (mgr.24416) 269 : cluster [DBG] pgmap v475: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:10 vm07 bash[20771]: cluster 2026-03-09T21:23:09.799434+0000 mgr.y (mgr.24416) 269 : cluster [DBG] pgmap v475: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: audit 2026-03-09T21:23:09.158048+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: audit 2026-03-09T21:23:09.158048+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: cluster 2026-03-09T21:23:09.167775+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: cluster 2026-03-09T21:23:09.167775+0000 mon.a (mon.0) 1323 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: audit 2026-03-09T21:23:09.169751+0000 mon.a (mon.0) 1324 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: audit 2026-03-09T21:23:09.169751+0000 mon.a (mon.0) 1324 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: audit 2026-03-09T21:23:09.169959+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: audit 2026-03-09T21:23:09.169959+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: cluster 2026-03-09T21:23:09.799434+0000 mgr.y (mgr.24416) 269 : cluster [DBG] pgmap v475: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:10.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:10 vm07 bash[28052]: cluster 2026-03-09T21:23:09.799434+0000 mgr.y (mgr.24416) 269 : cluster [DBG] pgmap v475: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: cluster 2026-03-09T21:23:10.158083+0000 mon.a (mon.0) 1326 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: cluster 2026-03-09T21:23:10.158083+0000 mon.a (mon.0) 1326 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: audit 2026-03-09T21:23:10.169233+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: audit 2026-03-09T21:23:10.169233+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: cluster 2026-03-09T21:23:10.183575+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: cluster 2026-03-09T21:23:10.183575+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: audit 2026-03-09T21:23:10.187397+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:11 vm07 bash[20771]: audit 2026-03-09T21:23:10.187397+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: cluster 2026-03-09T21:23:10.158083+0000 mon.a (mon.0) 1326 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: cluster 2026-03-09T21:23:10.158083+0000 mon.a (mon.0) 1326 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: audit 2026-03-09T21:23:10.169233+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: audit 2026-03-09T21:23:10.169233+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: cluster 2026-03-09T21:23:10.183575+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: cluster 2026-03-09T21:23:10.183575+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: audit 2026-03-09T21:23:10.187397+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T21:23:11.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:11 vm07 bash[28052]: audit 2026-03-09T21:23:10.187397+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: cluster 2026-03-09T21:23:10.158083+0000 mon.a (mon.0) 1326 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: cluster 2026-03-09T21:23:10.158083+0000 mon.a (mon.0) 1326 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: audit 2026-03-09T21:23:10.169233+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: audit 2026-03-09T21:23:10.169233+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: cluster 2026-03-09T21:23:10.183575+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: cluster 2026-03-09T21:23:10.183575+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: audit 2026-03-09T21:23:10.187397+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T21:23:11.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:11 vm10 bash[23387]: audit 2026-03-09T21:23:10.187397+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:11.256176+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:11.256176+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: cluster 2026-03-09T21:23:11.328596+0000 mon.a (mon.0) 1331 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: cluster 2026-03-09T21:23:11.328596+0000 mon.a (mon.0) 1331 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:11.333467+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:11.333467+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: cluster 2026-03-09T21:23:11.799794+0000 mgr.y (mgr.24416) 270 : cluster [DBG] pgmap v478: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: cluster 2026-03-09T21:23:11.799794+0000 mgr.y (mgr.24416) 270 : cluster [DBG] pgmap v478: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:12.126255+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:12.126255+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:12.259239+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:12.259239+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: cluster 2026-03-09T21:23:12.266168+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: cluster 2026-03-09T21:23:12.266168+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:12.270129+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:12 vm07 bash[20771]: audit 2026-03-09T21:23:12.270129+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:11.256176+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:11.256176+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: cluster 2026-03-09T21:23:11.328596+0000 mon.a (mon.0) 1331 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: cluster 2026-03-09T21:23:11.328596+0000 mon.a (mon.0) 1331 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:11.333467+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:11.333467+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: cluster 2026-03-09T21:23:11.799794+0000 mgr.y (mgr.24416) 270 : cluster [DBG] pgmap v478: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: cluster 2026-03-09T21:23:11.799794+0000 mgr.y (mgr.24416) 270 : cluster [DBG] pgmap v478: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:12.126255+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:12.126255+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:12.259239+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:12.259239+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: cluster 2026-03-09T21:23:12.266168+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: cluster 2026-03-09T21:23:12.266168+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:12.270129+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T21:23:12.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:12 vm07 bash[28052]: audit 2026-03-09T21:23:12.270129+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:11.256176+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:11.256176+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: cluster 2026-03-09T21:23:11.328596+0000 mon.a (mon.0) 1331 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: cluster 2026-03-09T21:23:11.328596+0000 mon.a (mon.0) 1331 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:11.333467+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:11.333467+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: cluster 2026-03-09T21:23:11.799794+0000 mgr.y (mgr.24416) 270 : cluster [DBG] pgmap v478: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: cluster 2026-03-09T21:23:11.799794+0000 mgr.y (mgr.24416) 270 : cluster [DBG] pgmap v478: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:12.126255+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:12.126255+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:12.259239+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:12.259239+0000 mon.a (mon.0) 1333 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: cluster 2026-03-09T21:23:12.266168+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: cluster 2026-03-09T21:23:12.266168+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:12.270129+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T21:23:12.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:12 vm10 bash[23387]: audit 2026-03-09T21:23:12.270129+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: audit 2026-03-09T21:23:13.262287+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: audit 2026-03-09T21:23:13.262287+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: cluster 2026-03-09T21:23:13.271781+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: cluster 2026-03-09T21:23:13.271781+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: audit 2026-03-09T21:23:13.276941+0000 mon.a (mon.0) 1338 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: audit 2026-03-09T21:23:13.276941+0000 mon.a (mon.0) 1338 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: cluster 2026-03-09T21:23:13.800066+0000 mgr.y (mgr.24416) 271 : cluster [DBG] pgmap v481: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:14 vm07 bash[20771]: cluster 2026-03-09T21:23:13.800066+0000 mgr.y (mgr.24416) 271 : cluster [DBG] pgmap v481: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: audit 2026-03-09T21:23:13.262287+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: audit 2026-03-09T21:23:13.262287+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: cluster 2026-03-09T21:23:13.271781+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: cluster 2026-03-09T21:23:13.271781+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: audit 2026-03-09T21:23:13.276941+0000 mon.a (mon.0) 1338 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: audit 2026-03-09T21:23:13.276941+0000 mon.a (mon.0) 1338 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: cluster 2026-03-09T21:23:13.800066+0000 mgr.y (mgr.24416) 271 : cluster [DBG] pgmap v481: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:14.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:14 vm07 bash[28052]: cluster 2026-03-09T21:23:13.800066+0000 mgr.y (mgr.24416) 271 : cluster [DBG] pgmap v481: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: audit 2026-03-09T21:23:13.262287+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: audit 2026-03-09T21:23:13.262287+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: cluster 2026-03-09T21:23:13.271781+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: cluster 2026-03-09T21:23:13.271781+0000 mon.a (mon.0) 1337 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: audit 2026-03-09T21:23:13.276941+0000 mon.a (mon.0) 1338 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: audit 2026-03-09T21:23:13.276941+0000 mon.a (mon.0) 1338 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: cluster 2026-03-09T21:23:13.800066+0000 mgr.y (mgr.24416) 271 : cluster [DBG] pgmap v481: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:14.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:14 vm10 bash[23387]: cluster 2026-03-09T21:23:13.800066+0000 mgr.y (mgr.24416) 271 : cluster [DBG] pgmap v481: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:15.281 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_applications PASSED [ 84%] 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:15 vm07 bash[20771]: audit 2026-03-09T21:23:14.270841+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:15 vm07 bash[20771]: audit 2026-03-09T21:23:14.270841+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:15 vm07 bash[20771]: cluster 2026-03-09T21:23:14.284869+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:15 vm07 bash[20771]: cluster 2026-03-09T21:23:14.284869+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:15 vm07 bash[28052]: audit 2026-03-09T21:23:14.270841+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:15 vm07 bash[28052]: audit 2026-03-09T21:23:14.270841+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:15 vm07 bash[28052]: cluster 2026-03-09T21:23:14.284869+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T21:23:15.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:15 vm07 bash[28052]: cluster 2026-03-09T21:23:14.284869+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T21:23:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:15 vm10 bash[23387]: audit 2026-03-09T21:23:14.270841+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:15 vm10 bash[23387]: audit 2026-03-09T21:23:14.270841+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? 
192.168.123.107:0/2101620924' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:15 vm10 bash[23387]: cluster 2026-03-09T21:23:14.284869+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T21:23:15.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:15 vm10 bash[23387]: cluster 2026-03-09T21:23:14.284869+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:16 vm07 bash[20771]: cluster 2026-03-09T21:23:15.278243+0000 mon.a (mon.0) 1341 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:16 vm07 bash[20771]: cluster 2026-03-09T21:23:15.278243+0000 mon.a (mon.0) 1341 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:16 vm07 bash[20771]: cluster 2026-03-09T21:23:15.800322+0000 mgr.y (mgr.24416) 272 : cluster [DBG] pgmap v484: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:16 vm07 bash[20771]: cluster 2026-03-09T21:23:15.800322+0000 mgr.y (mgr.24416) 272 : cluster [DBG] pgmap v484: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:16 vm07 bash[28052]: cluster 2026-03-09T21:23:15.278243+0000 mon.a (mon.0) 1341 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:16 vm07 bash[28052]: cluster 2026-03-09T21:23:15.278243+0000 mon.a (mon.0) 1341 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:16 vm07 
bash[28052]: cluster 2026-03-09T21:23:15.800322+0000 mgr.y (mgr.24416) 272 : cluster [DBG] pgmap v484: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:16.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:16 vm07 bash[28052]: cluster 2026-03-09T21:23:15.800322+0000 mgr.y (mgr.24416) 272 : cluster [DBG] pgmap v484: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:16 vm10 bash[23387]: cluster 2026-03-09T21:23:15.278243+0000 mon.a (mon.0) 1341 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T21:23:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:16 vm10 bash[23387]: cluster 2026-03-09T21:23:15.278243+0000 mon.a (mon.0) 1341 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-09T21:23:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:16 vm10 bash[23387]: cluster 2026-03-09T21:23:15.800322+0000 mgr.y (mgr.24416) 272 : cluster [DBG] pgmap v484: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:16.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:16 vm10 bash[23387]: cluster 2026-03-09T21:23:15.800322+0000 mgr.y (mgr.24416) 272 : cluster [DBG] pgmap v484: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:16.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:23:16 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: cluster 2026-03-09T21:23:16.338387+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: cluster 2026-03-09T21:23:16.338387+0000 mon.a (mon.0) 1342 : cluster 
[DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: audit 2026-03-09T21:23:16.343295+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.107:0/3766712537' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: audit 2026-03-09T21:23:16.343295+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.107:0/3766712537' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: audit 2026-03-09T21:23:16.344065+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: audit 2026-03-09T21:23:16.344065+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: audit 2026-03-09T21:23:16.494324+0000 mgr.y (mgr.24416) 273 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:17 vm07 bash[20771]: audit 2026-03-09T21:23:16.494324+0000 mgr.y (mgr.24416) 273 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: cluster 2026-03-09T21:23:16.338387+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: cluster 2026-03-09T21:23:16.338387+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: audit 2026-03-09T21:23:16.343295+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.107:0/3766712537' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: audit 2026-03-09T21:23:16.343295+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.107:0/3766712537' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: audit 2026-03-09T21:23:16.344065+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: audit 2026-03-09T21:23:16.344065+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: audit 2026-03-09T21:23:16.494324+0000 mgr.y (mgr.24416) 273 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:17.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:17 vm07 bash[28052]: audit 2026-03-09T21:23:16.494324+0000 mgr.y (mgr.24416) 273 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: cluster 2026-03-09T21:23:16.338387+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: cluster 2026-03-09T21:23:16.338387+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: audit 2026-03-09T21:23:16.343295+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.107:0/3766712537' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: audit 2026-03-09T21:23:16.343295+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 
192.168.123.107:0/3766712537' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: audit 2026-03-09T21:23:16.344065+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: audit 2026-03-09T21:23:16.344065+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: audit 2026-03-09T21:23:16.494324+0000 mgr.y (mgr.24416) 273 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:17.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:17 vm10 bash[23387]: audit 2026-03-09T21:23:16.494324+0000 mgr.y (mgr.24416) 273 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:18.348 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_service_daemon PASSED [ 85%] 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: audit 2026-03-09T21:23:17.318626+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: audit 2026-03-09T21:23:17.318626+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: cluster 2026-03-09T21:23:17.335571+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: cluster 2026-03-09T21:23:17.335571+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: cluster 2026-03-09T21:23:17.800859+0000 mgr.y (mgr.24416) 274 : cluster [DBG] pgmap v487: 196 pgs: 196 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: cluster 2026-03-09T21:23:17.800859+0000 mgr.y (mgr.24416) 274 : cluster [DBG] pgmap v487: 196 pgs: 196 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: cluster 2026-03-09T21:23:18.346577+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:18 vm07 bash[20771]: cluster 2026-03-09T21:23:18.346577+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: audit 2026-03-09T21:23:17.318626+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: audit 2026-03-09T21:23:17.318626+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: cluster 2026-03-09T21:23:17.335571+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: cluster 2026-03-09T21:23:17.335571+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: cluster 2026-03-09T21:23:17.800859+0000 mgr.y (mgr.24416) 274 : cluster [DBG] pgmap v487: 196 pgs: 196 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: cluster 2026-03-09T21:23:17.800859+0000 mgr.y (mgr.24416) 274 : cluster [DBG] pgmap v487: 196 pgs: 196 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: cluster 2026-03-09T21:23:18.346577+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T21:23:18.652 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:18 vm07 bash[28052]: cluster 2026-03-09T21:23:18.346577+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: audit 2026-03-09T21:23:17.318626+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: audit 2026-03-09T21:23:17.318626+0000 mon.a (mon.0) 1344 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: cluster 2026-03-09T21:23:17.335571+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: cluster 2026-03-09T21:23:17.335571+0000 mon.a (mon.0) 1345 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: cluster 2026-03-09T21:23:17.800859+0000 mgr.y (mgr.24416) 274 : cluster [DBG] pgmap v487: 196 pgs: 196 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: cluster 2026-03-09T21:23:17.800859+0000 mgr.y (mgr.24416) 274 : cluster [DBG] pgmap v487: 196 pgs: 196 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: cluster 2026-03-09T21:23:18.346577+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T21:23:18.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:18 vm10 bash[23387]: cluster 2026-03-09T21:23:18.346577+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-09T21:23:19.114 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:23:18 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:23:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:23:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:19 vm10 bash[23387]: cluster 2026-03-09T21:23:18.355231+0000 mon.a (mon.0) 1347 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:19.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:19 vm10 bash[23387]: cluster 2026-03-09T21:23:18.355231+0000 mon.a (mon.0) 1347 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:19 vm10 bash[23387]: cluster 2026-03-09T21:23:19.367699+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T21:23:19.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:19 vm10 bash[23387]: cluster 2026-03-09T21:23:19.367699+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T21:23:19.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:19 vm07 bash[20771]: cluster 2026-03-09T21:23:18.355231+0000 mon.a (mon.0) 1347 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:19 vm07 bash[20771]: cluster 2026-03-09T21:23:18.355231+0000 mon.a (mon.0) 1347 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:19 vm07 bash[20771]: cluster 2026-03-09T21:23:19.367699+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:19 vm07 bash[20771]: cluster 2026-03-09T21:23:19.367699+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:19 vm07 bash[28052]: cluster 2026-03-09T21:23:18.355231+0000 mon.a (mon.0) 1347 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:19 vm07 bash[28052]: cluster 2026-03-09T21:23:18.355231+0000 mon.a (mon.0) 1347 : 
cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:19 vm07 bash[28052]: cluster 2026-03-09T21:23:19.367699+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T21:23:19.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:19 vm07 bash[28052]: cluster 2026-03-09T21:23:19.367699+0000 mon.a (mon.0) 1348 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-09T21:23:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:20 vm10 bash[23387]: audit 2026-03-09T21:23:19.376800+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:20 vm10 bash[23387]: audit 2026-03-09T21:23:19.376800+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:20 vm10 bash[23387]: cluster 2026-03-09T21:23:19.801142+0000 mgr.y (mgr.24416) 275 : cluster [DBG] pgmap v490: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:20.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:20 vm10 bash[23387]: cluster 2026-03-09T21:23:19.801142+0000 mgr.y (mgr.24416) 275 : cluster [DBG] pgmap v490: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:20 vm07 bash[20771]: audit 2026-03-09T21:23:19.376800+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 
192.168.123.107:0/435508309' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:20 vm07 bash[20771]: audit 2026-03-09T21:23:19.376800+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:20 vm07 bash[20771]: cluster 2026-03-09T21:23:19.801142+0000 mgr.y (mgr.24416) 275 : cluster [DBG] pgmap v490: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:20 vm07 bash[20771]: cluster 2026-03-09T21:23:19.801142+0000 mgr.y (mgr.24416) 275 : cluster [DBG] pgmap v490: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:20 vm07 bash[28052]: audit 2026-03-09T21:23:19.376800+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:20 vm07 bash[28052]: audit 2026-03-09T21:23:19.376800+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? 
192.168.123.107:0/435508309' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:20 vm07 bash[28052]: cluster 2026-03-09T21:23:19.801142+0000 mgr.y (mgr.24416) 275 : cluster [DBG] pgmap v490: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:20.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:20 vm07 bash[28052]: cluster 2026-03-09T21:23:19.801142+0000 mgr.y (mgr.24416) 275 : cluster [DBG] pgmap v490: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:21.384 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_alignment PASSED [ 86%] 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: audit 2026-03-09T21:23:20.365650+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: audit 2026-03-09T21:23:20.365650+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 
192.168.123.107:0/435508309' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: cluster 2026-03-09T21:23:20.374768+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: cluster 2026-03-09T21:23:20.374768+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: cluster 2026-03-09T21:23:21.383040+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: cluster 2026-03-09T21:23:21.383040+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: audit 2026-03-09T21:23:21.393877+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: audit 2026-03-09T21:23:21.393877+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: audit 2026-03-09T21:23:21.395489+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:21 vm10 bash[23387]: audit 2026-03-09T21:23:21.395489+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: audit 2026-03-09T21:23:20.365650+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:21.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: audit 2026-03-09T21:23:20.365650+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: cluster 2026-03-09T21:23:20.374768+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: cluster 2026-03-09T21:23:20.374768+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: cluster 2026-03-09T21:23:21.383040+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: cluster 2026-03-09T21:23:21.383040+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:23:21 vm07 bash[20771]: audit 2026-03-09T21:23:21.393877+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: audit 2026-03-09T21:23:21.393877+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: audit 2026-03-09T21:23:21.395489+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:21 vm07 bash[20771]: audit 2026-03-09T21:23:21.395489+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: audit 2026-03-09T21:23:20.365650+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 192.168.123.107:0/435508309' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: audit 2026-03-09T21:23:20.365650+0000 mon.a (mon.0) 1350 : audit [INF] from='client.? 
192.168.123.107:0/435508309' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: cluster 2026-03-09T21:23:20.374768+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: cluster 2026-03-09T21:23:20.374768+0000 mon.a (mon.0) 1351 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: cluster 2026-03-09T21:23:21.383040+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: cluster 2026-03-09T21:23:21.383040+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: audit 2026-03-09T21:23:21.393877+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: audit 2026-03-09T21:23:21.393877+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: audit 2026-03-09T21:23:21.395489+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:21.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:21 vm07 bash[28052]: audit 2026-03-09T21:23:21.395489+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: cluster 2026-03-09T21:23:21.801395+0000 mgr.y (mgr.24416) 276 : cluster [DBG] pgmap v493: 164 pgs: 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: cluster 2026-03-09T21:23:21.801395+0000 mgr.y (mgr.24416) 276 : cluster [DBG] pgmap v493: 164 pgs: 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: audit 2026-03-09T21:23:22.372349+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: audit 2026-03-09T21:23:22.372349+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: cluster 2026-03-09T21:23:22.376142+0000 mon.a (mon.0) 1355 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: cluster 2026-03-09T21:23:22.376142+0000 mon.a (mon.0) 1355 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: audit 2026-03-09T21:23:22.376797+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: audit 2026-03-09T21:23:22.376797+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: audit 2026-03-09T21:23:22.385786+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:22 vm10 bash[23387]: audit 2026-03-09T21:23:22.385786+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: cluster 2026-03-09T21:23:21.801395+0000 mgr.y (mgr.24416) 276 : cluster [DBG] pgmap v493: 164 pgs: 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: cluster 2026-03-09T21:23:21.801395+0000 mgr.y (mgr.24416) 276 : cluster [DBG] pgmap v493: 164 pgs: 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: audit 2026-03-09T21:23:22.372349+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: audit 2026-03-09T21:23:22.372349+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: cluster 2026-03-09T21:23:22.376142+0000 mon.a (mon.0) 1355 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: cluster 2026-03-09T21:23:22.376142+0000 mon.a (mon.0) 1355 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: audit 2026-03-09T21:23:22.376797+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: audit 2026-03-09T21:23:22.376797+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: audit 2026-03-09T21:23:22.385786+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:22 vm07 bash[20771]: audit 2026-03-09T21:23:22.385786+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: cluster 2026-03-09T21:23:21.801395+0000 mgr.y (mgr.24416) 276 : cluster [DBG] pgmap v493: 164 pgs: 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: cluster 2026-03-09T21:23:21.801395+0000 mgr.y (mgr.24416) 276 : cluster [DBG] pgmap v493: 164 pgs: 164 active+clean; 455 KiB data, 475 MiB used, 160 GiB / 160 GiB avail 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: audit 2026-03-09T21:23:22.372349+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: audit 2026-03-09T21:23:22.372349+0000 mon.a (mon.0) 1354 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: cluster 2026-03-09T21:23:22.376142+0000 mon.a (mon.0) 1355 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: cluster 2026-03-09T21:23:22.376142+0000 mon.a (mon.0) 1355 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: audit 2026-03-09T21:23:22.376797+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: audit 2026-03-09T21:23:22.376797+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: audit 2026-03-09T21:23:22.385786+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:22.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:22 vm07 bash[28052]: audit 2026-03-09T21:23:22.385786+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T21:23:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:24 vm10 bash[23387]: cluster 2026-03-09T21:23:23.394537+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T21:23:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:24 vm10 bash[23387]: cluster 2026-03-09T21:23:23.394537+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T21:23:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:24 vm10 bash[23387]: cluster 2026-03-09T21:23:23.801741+0000 mgr.y (mgr.24416) 277 : cluster [DBG] pgmap v496: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:24.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:24 vm10 bash[23387]: cluster 2026-03-09T21:23:23.801741+0000 mgr.y (mgr.24416) 277 : cluster [DBG] pgmap v496: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:24 vm07 bash[20771]: cluster 2026-03-09T21:23:23.394537+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:24 vm07 bash[20771]: cluster 2026-03-09T21:23:23.394537+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:24 vm07 bash[20771]: cluster 2026-03-09T21:23:23.801741+0000 mgr.y (mgr.24416) 277 : cluster [DBG] pgmap v496: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:24 vm07 bash[20771]: cluster 
2026-03-09T21:23:23.801741+0000 mgr.y (mgr.24416) 277 : cluster [DBG] pgmap v496: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:24 vm07 bash[28052]: cluster 2026-03-09T21:23:23.394537+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:24 vm07 bash[28052]: cluster 2026-03-09T21:23:23.394537+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:24 vm07 bash[28052]: cluster 2026-03-09T21:23:23.801741+0000 mgr.y (mgr.24416) 277 : cluster [DBG] pgmap v496: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:24.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:24 vm07 bash[28052]: cluster 2026-03-09T21:23:23.801741+0000 mgr.y (mgr.24416) 277 : cluster [DBG] pgmap v496: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: audit 2026-03-09T21:23:24.388676+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: audit 2026-03-09T21:23:24.388676+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: cluster 2026-03-09T21:23:24.401327+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: cluster 2026-03-09T21:23:24.401327+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: audit 2026-03-09T21:23:24.404192+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: audit 2026-03-09T21:23:24.404192+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: audit 2026-03-09T21:23:24.404973+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: audit 2026-03-09T21:23:24.404973+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: cluster 2026-03-09T21:23:24.826935+0000 mon.a (mon.0) 1361 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:25.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:25 vm10 bash[23387]: cluster 2026-03-09T21:23:24.826935+0000 mon.a (mon.0) 1361 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: audit 2026-03-09T21:23:24.388676+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: audit 2026-03-09T21:23:24.388676+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: cluster 2026-03-09T21:23:24.401327+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: cluster 2026-03-09T21:23:24.401327+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: audit 2026-03-09T21:23:24.404192+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: audit 2026-03-09T21:23:24.404192+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: audit 2026-03-09T21:23:24.404973+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: audit 2026-03-09T21:23:24.404973+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: cluster 2026-03-09T21:23:24.826935+0000 mon.a (mon.0) 1361 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:25 vm07 bash[20771]: cluster 2026-03-09T21:23:24.826935+0000 mon.a (mon.0) 1361 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: audit 2026-03-09T21:23:24.388676+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: audit 2026-03-09T21:23:24.388676+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: cluster 2026-03-09T21:23:24.401327+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: cluster 2026-03-09T21:23:24.401327+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: audit 2026-03-09T21:23:24.404192+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: audit 2026-03-09T21:23:24.404192+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.107:0/813851910' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: audit 2026-03-09T21:23:24.404973+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: audit 2026-03-09T21:23:24.404973+0000 mon.a (mon.0) 1360 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: cluster 2026-03-09T21:23:24.826935+0000 mon.a (mon.0) 1361 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:25.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:25 vm07 bash[28052]: cluster 2026-03-09T21:23:24.826935+0000 mon.a (mon.0) 1361 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:26.421 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctxEc::test_alignment PASSED [ 87%] 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:25.392279+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:25.392279+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: cluster 2026-03-09T21:23:25.403659+0000 mon.a (mon.0) 1363 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: cluster 2026-03-09T21:23:25.403659+0000 mon.a (mon.0) 1363 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: cluster 2026-03-09T21:23:25.802094+0000 mgr.y (mgr.24416) 278 : cluster [DBG] pgmap v499: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: cluster 2026-03-09T21:23:25.802094+0000 mgr.y (mgr.24416) 278 : cluster [DBG] pgmap v499: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:25.930305+0000 mon.c (mon.2) 128 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:25.930305+0000 mon.c (mon.2) 128 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:26.294431+0000 mon.c (mon.2) 129 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:23:26.692 
INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:26.294431+0000 mon.c (mon.2) 129 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:26.295606+0000 mon.c (mon.2) 130 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:26.295606+0000 mon.c (mon.2) 130 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:26.401578+0000 mon.a (mon.0) 1364 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: audit 2026-03-09T21:23:26.401578+0000 mon.a (mon.0) 1364 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: cluster 2026-03-09T21:23:26.413074+0000 mon.a (mon.0) 1365 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T21:23:26.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:26 vm10 bash[23387]: cluster 2026-03-09T21:23:26.413074+0000 mon.a (mon.0) 1365 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T21:23:26.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:23:26 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:23:26.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:25.392279+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:25.392279+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: cluster 2026-03-09T21:23:25.403659+0000 mon.a (mon.0) 1363 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: cluster 2026-03-09T21:23:25.403659+0000 mon.a (mon.0) 1363 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: cluster 2026-03-09T21:23:25.802094+0000 mgr.y (mgr.24416) 278 : cluster [DBG] pgmap v499: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: cluster 2026-03-09T21:23:25.802094+0000 mgr.y (mgr.24416) 278 : cluster [DBG] pgmap v499: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:25.930305+0000 mon.c (mon.2) 128 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:25.930305+0000 mon.c (mon.2) 128 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:26.294431+0000 mon.c (mon.2) 129 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:26.294431+0000 mon.c (mon.2) 129 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:26.295606+0000 mon.c (mon.2) 130 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:26.295606+0000 mon.c (mon.2) 130 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:26.401578+0000 mon.a (mon.0) 1364 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: audit 2026-03-09T21:23:26.401578+0000 mon.a (mon.0) 1364 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: cluster 2026-03-09T21:23:26.413074+0000 mon.a (mon.0) 1365 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:26 vm07 bash[20771]: cluster 2026-03-09T21:23:26.413074+0000 mon.a (mon.0) 1365 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: 
audit 2026-03-09T21:23:25.392279+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:25.392279+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: cluster 2026-03-09T21:23:25.403659+0000 mon.a (mon.0) 1363 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: cluster 2026-03-09T21:23:25.403659+0000 mon.a (mon.0) 1363 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: cluster 2026-03-09T21:23:25.802094+0000 mgr.y (mgr.24416) 278 : cluster [DBG] pgmap v499: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: cluster 2026-03-09T21:23:25.802094+0000 mgr.y (mgr.24416) 278 : cluster [DBG] pgmap v499: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:25.930305+0000 mon.c (mon.2) 128 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:25.930305+0000 mon.c (mon.2) 128 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:26.294431+0000 mon.c (mon.2) 129 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:26.294431+0000 mon.c (mon.2) 129 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:26.295606+0000 mon.c (mon.2) 130 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:26.295606+0000 mon.c (mon.2) 130 : audit [INF] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:26.401578+0000 mon.a (mon.0) 1364 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: audit 2026-03-09T21:23:26.401578+0000 mon.a (mon.0) 1364 : audit [INF] from='mgr.24416 ' entity='mgr.y' 2026-03-09T21:23:26.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: cluster 2026-03-09T21:23:26.413074+0000 mon.a (mon.0) 1365 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 2026-03-09T21:23:26.866 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:26 vm07 bash[28052]: cluster 2026-03-09T21:23:26.413074+0000 mon.a (mon.0) 1365 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in 
2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:27 vm07 bash[20771]: audit 2026-03-09T21:23:26.503892+0000 mgr.y (mgr.24416) 279 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:27 vm07 bash[20771]: audit 2026-03-09T21:23:26.503892+0000 mgr.y (mgr.24416) 279 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:27 vm07 bash[20771]: audit 2026-03-09T21:23:27.132321+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:27 vm07 bash[20771]: audit 2026-03-09T21:23:27.132321+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:27 vm07 bash[28052]: audit 2026-03-09T21:23:26.503892+0000 mgr.y (mgr.24416) 279 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:27 vm07 bash[28052]: audit 2026-03-09T21:23:26.503892+0000 mgr.y (mgr.24416) 279 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:27 vm07 bash[28052]: audit 2026-03-09T21:23:27.132321+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T21:23:27.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:27 vm07 bash[28052]: audit 2026-03-09T21:23:27.132321+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:27 vm10 bash[23387]: audit 2026-03-09T21:23:26.503892+0000 mgr.y (mgr.24416) 279 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:27 vm10 bash[23387]: audit 2026-03-09T21:23:26.503892+0000 mgr.y (mgr.24416) 279 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:27 vm10 bash[23387]: audit 2026-03-09T21:23:27.132321+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:27.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:27 vm10 bash[23387]: audit 2026-03-09T21:23:27.132321+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: cluster 2026-03-09T21:23:27.573949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: cluster 2026-03-09T21:23:27.573949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: audit 
2026-03-09T21:23:27.597840+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.107:0/1954996720' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: audit 2026-03-09T21:23:27.597840+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.107:0/1954996720' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: audit 2026-03-09T21:23:27.598588+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: audit 2026-03-09T21:23:27.598588+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: cluster 2026-03-09T21:23:27.802642+0000 mgr.y (mgr.24416) 280 : cluster [DBG] pgmap v502: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:28.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:28 vm10 bash[23387]: cluster 2026-03-09T21:23:27.802642+0000 mgr.y (mgr.24416) 280 : cluster [DBG] pgmap v502: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: cluster 2026-03-09T21:23:27.573949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: cluster 2026-03-09T21:23:27.573949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap 
e362: 8 total, 8 up, 8 in 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: audit 2026-03-09T21:23:27.597840+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.107:0/1954996720' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: audit 2026-03-09T21:23:27.597840+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.107:0/1954996720' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: audit 2026-03-09T21:23:27.598588+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: audit 2026-03-09T21:23:27.598588+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: cluster 2026-03-09T21:23:27.802642+0000 mgr.y (mgr.24416) 280 : cluster [DBG] pgmap v502: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:28 vm07 bash[20771]: cluster 2026-03-09T21:23:27.802642+0000 mgr.y (mgr.24416) 280 : cluster [DBG] pgmap v502: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:23:28 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:23:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: cluster 2026-03-09T21:23:27.573949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: cluster 2026-03-09T21:23:27.573949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: audit 2026-03-09T21:23:27.597840+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.107:0/1954996720' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: audit 2026-03-09T21:23:27.597840+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 
192.168.123.107:0/1954996720' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: audit 2026-03-09T21:23:27.598588+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: audit 2026-03-09T21:23:27.598588+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: cluster 2026-03-09T21:23:27.802642+0000 mgr.y (mgr.24416) 280 : cluster [DBG] pgmap v502: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:29.073 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:28 vm07 bash[28052]: cluster 2026-03-09T21:23:27.802642+0000 mgr.y (mgr.24416) 280 : cluster [DBG] pgmap v502: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:29.653 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_last_version PASSED [ 89%] 2026-03-09T21:23:29.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:29 vm10 bash[23387]: audit 2026-03-09T21:23:28.627475+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:29.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:29 vm10 bash[23387]: audit 2026-03-09T21:23:28.627475+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:29.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:29 vm10 bash[23387]: cluster 2026-03-09T21:23:28.669916+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T21:23:29.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:29 vm10 bash[23387]: cluster 2026-03-09T21:23:28.669916+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T21:23:30.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:29 vm07 bash[20771]: audit 2026-03-09T21:23:28.627475+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:30.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:29 vm07 bash[20771]: audit 2026-03-09T21:23:28.627475+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:30.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:29 vm07 bash[20771]: cluster 2026-03-09T21:23:28.669916+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T21:23:30.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:29 vm07 bash[20771]: cluster 2026-03-09T21:23:28.669916+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T21:23:30.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:29 vm07 bash[28052]: audit 2026-03-09T21:23:28.627475+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:30.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:29 vm07 bash[28052]: audit 2026-03-09T21:23:28.627475+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:30.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:29 vm07 bash[28052]: cluster 2026-03-09T21:23:28.669916+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T21:23:30.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:29 vm07 bash[28052]: cluster 2026-03-09T21:23:28.669916+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-09T21:23:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:30 vm10 bash[23387]: cluster 2026-03-09T21:23:29.648571+0000 mon.a (mon.0) 1370 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T21:23:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:30 vm10 bash[23387]: cluster 2026-03-09T21:23:29.648571+0000 mon.a (mon.0) 1370 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T21:23:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:30 vm10 bash[23387]: cluster 2026-03-09T21:23:29.803022+0000 mgr.y (mgr.24416) 281 : cluster [DBG] pgmap v505: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:30 vm10 bash[23387]: cluster 2026-03-09T21:23:29.803022+0000 mgr.y (mgr.24416) 281 : cluster [DBG] pgmap v505: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:30 vm10 bash[23387]: cluster 2026-03-09T21:23:29.827644+0000 mon.a (mon.0) 1371 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:30.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:30 vm10 bash[23387]: cluster 2026-03-09T21:23:29.827644+0000 mon.a (mon.0) 1371 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-09T21:23:31.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:30 vm07 bash[20771]: cluster 2026-03-09T21:23:29.648571+0000 mon.a (mon.0) 1370 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T21:23:31.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:30 vm07 bash[20771]: cluster 2026-03-09T21:23:29.648571+0000 mon.a (mon.0) 1370 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T21:23:31.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:30 vm07 bash[20771]: cluster 2026-03-09T21:23:29.803022+0000 mgr.y (mgr.24416) 281 : cluster [DBG] pgmap v505: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:30 vm07 bash[20771]: cluster 2026-03-09T21:23:29.803022+0000 mgr.y (mgr.24416) 281 : cluster [DBG] pgmap v505: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:30 vm07 bash[20771]: cluster 2026-03-09T21:23:29.827644+0000 mon.a (mon.0) 1371 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:30 vm07 bash[20771]: cluster 2026-03-09T21:23:29.827644+0000 mon.a (mon.0) 1371 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:30 vm07 bash[28052]: cluster 2026-03-09T21:23:29.648571+0000 mon.a (mon.0) 1370 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:30 vm07 bash[28052]: cluster 2026-03-09T21:23:29.648571+0000 mon.a (mon.0) 1370 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:30 
vm07 bash[28052]: cluster 2026-03-09T21:23:29.803022+0000 mgr.y (mgr.24416) 281 : cluster [DBG] pgmap v505: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:30 vm07 bash[28052]: cluster 2026-03-09T21:23:29.803022+0000 mgr.y (mgr.24416) 281 : cluster [DBG] pgmap v505: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:30 vm07 bash[28052]: cluster 2026-03-09T21:23:29.827644+0000 mon.a (mon.0) 1371 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:31.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:30 vm07 bash[28052]: cluster 2026-03-09T21:23:29.827644+0000 mon.a (mon.0) 1371 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:31 vm07 bash[20771]: cluster 2026-03-09T21:23:30.671020+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:31 vm07 bash[20771]: cluster 2026-03-09T21:23:30.671020+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:31 vm07 bash[20771]: audit 2026-03-09T21:23:30.676662+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.107:0/4070971035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:31 vm07 bash[20771]: audit 2026-03-09T21:23:30.676662+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.107:0/4070971035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:31 vm07 bash[20771]: audit 2026-03-09T21:23:30.676893+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:31 vm07 bash[20771]: audit 2026-03-09T21:23:30.676893+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:31 vm07 bash[28052]: cluster 2026-03-09T21:23:30.671020+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:31 vm07 bash[28052]: cluster 2026-03-09T21:23:30.671020+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:31 vm07 bash[28052]: audit 2026-03-09T21:23:30.676662+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.107:0/4070971035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:31 vm07 bash[28052]: audit 2026-03-09T21:23:30.676662+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.107:0/4070971035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:31 vm07 bash[28052]: audit 2026-03-09T21:23:30.676893+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:31 vm07 bash[28052]: audit 2026-03-09T21:23:30.676893+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:31 vm10 bash[23387]: cluster 2026-03-09T21:23:30.671020+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in
2026-03-09T21:23:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:31 vm10 bash[23387]: cluster 2026-03-09T21:23:30.671020+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in
2026-03-09T21:23:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:31 vm10 bash[23387]: audit 2026-03-09T21:23:30.676662+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.107:0/4070971035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:31 vm10 bash[23387]: audit 2026-03-09T21:23:30.676662+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.107:0/4070971035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:31 vm10 bash[23387]: audit 2026-03-09T21:23:30.676893+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:31 vm10 bash[23387]: audit 2026-03-09T21:23:30.676893+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T21:23:32.862 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_stats PASSED [ 90%]
2026-03-09T21:23:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:32 vm10 bash[23387]: audit 2026-03-09T21:23:31.707006+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:23:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:32 vm10 bash[23387]: audit 2026-03-09T21:23:31.707006+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:23:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:32 vm10 bash[23387]: cluster 2026-03-09T21:23:31.720624+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in
2026-03-09T21:23:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:32 vm10 bash[23387]: cluster 2026-03-09T21:23:31.720624+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in
2026-03-09T21:23:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:32 vm10 bash[23387]: cluster 2026-03-09T21:23:31.803313+0000 mgr.y (mgr.24416) 282 : cluster [DBG] pgmap v508: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:33.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:32 vm10 bash[23387]: cluster 2026-03-09T21:23:31.803313+0000 mgr.y (mgr.24416) 282 : cluster [DBG] pgmap v508: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:33.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:32 vm07 bash[20771]: audit 2026-03-09T21:23:31.707006+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:23:33.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:32 vm07 bash[20771]: audit 2026-03-09T21:23:31.707006+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:23:33.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:32 vm07 bash[20771]: cluster 2026-03-09T21:23:31.720624+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:32 vm07 bash[20771]: cluster 2026-03-09T21:23:31.720624+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:32 vm07 bash[20771]: cluster 2026-03-09T21:23:31.803313+0000 mgr.y (mgr.24416) 282 : cluster [DBG] pgmap v508: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:32 vm07 bash[20771]: cluster 2026-03-09T21:23:31.803313+0000 mgr.y (mgr.24416) 282 : cluster [DBG] pgmap v508: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:32 vm07 bash[28052]: audit 2026-03-09T21:23:31.707006+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:32 vm07 bash[28052]: audit 2026-03-09T21:23:31.707006+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:32 vm07 bash[28052]: cluster 2026-03-09T21:23:31.720624+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:32 vm07 bash[28052]: cluster 2026-03-09T21:23:31.720624+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:32 vm07 bash[28052]: cluster 2026-03-09T21:23:31.803313+0000 mgr.y (mgr.24416) 282 : cluster [DBG] pgmap v508: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:33.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:32 vm07 bash[28052]: cluster 2026-03-09T21:23:31.803313+0000 mgr.y (mgr.24416) 282 : cluster [DBG] pgmap v508: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:33 vm10 bash[23387]: cluster 2026-03-09T21:23:32.804919+0000 mon.a (mon.0) 1376 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-09T21:23:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:33 vm10 bash[23387]: cluster 2026-03-09T21:23:32.804919+0000 mon.a (mon.0) 1376 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-09T21:23:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:33 vm10 bash[23387]: cluster 2026-03-09T21:23:33.807784+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-09T21:23:34.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:33 vm10 bash[23387]: cluster 2026-03-09T21:23:33.807784+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:33 vm07 bash[20771]: cluster 2026-03-09T21:23:32.804919+0000 mon.a (mon.0) 1376 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:33 vm07 bash[20771]: cluster 2026-03-09T21:23:32.804919+0000 mon.a (mon.0) 1376 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:33 vm07 bash[20771]: cluster 2026-03-09T21:23:33.807784+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:33 vm07 bash[20771]: cluster 2026-03-09T21:23:33.807784+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:33 vm07 bash[28052]: cluster 2026-03-09T21:23:32.804919+0000 mon.a (mon.0) 1376 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:33 vm07 bash[28052]: cluster 2026-03-09T21:23:32.804919+0000 mon.a (mon.0) 1376 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:33 vm07 bash[28052]: cluster 2026-03-09T21:23:33.807784+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-09T21:23:34.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:33 vm07 bash[28052]: cluster 2026-03-09T21:23:33.807784+0000 mon.a (mon.0) 1377 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in
2026-03-09T21:23:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:34 vm10 bash[23387]: cluster 2026-03-09T21:23:33.803591+0000 mgr.y (mgr.24416) 283 : cluster [DBG] pgmap v511: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:34 vm10 bash[23387]: cluster 2026-03-09T21:23:33.803591+0000 mgr.y (mgr.24416) 283 : cluster [DBG] pgmap v511: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:34 vm10 bash[23387]: cluster 2026-03-09T21:23:34.821298+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-09T21:23:35.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:34 vm10 bash[23387]: cluster 2026-03-09T21:23:34.821298+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-09T21:23:35.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:34 vm07 bash[20771]: cluster 2026-03-09T21:23:33.803591+0000 mgr.y (mgr.24416) 283 : cluster [DBG] pgmap v511: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:34 vm07 bash[20771]: cluster 2026-03-09T21:23:33.803591+0000 mgr.y (mgr.24416) 283 : cluster [DBG] pgmap v511: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:34 vm07 bash[20771]: cluster 2026-03-09T21:23:34.821298+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:34 vm07 bash[20771]: cluster 2026-03-09T21:23:34.821298+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:34 vm07 bash[28052]: cluster 2026-03-09T21:23:33.803591+0000 mgr.y (mgr.24416) 283 : cluster [DBG] pgmap v511: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:34 vm07 bash[28052]: cluster 2026-03-09T21:23:33.803591+0000 mgr.y (mgr.24416) 283 : cluster [DBG] pgmap v511: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:34 vm07 bash[28052]: cluster 2026-03-09T21:23:34.821298+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-09T21:23:35.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:34 vm07 bash[28052]: cluster 2026-03-09T21:23:34.821298+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-09T21:23:35.825 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_read PASSED [ 91%]
2026-03-09T21:23:36.845 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:23:36 vm10 bash[48970]: debug there is no tcmu-runner data available
2026-03-09T21:23:37.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:36 vm07 bash[20771]: cluster 2026-03-09T21:23:35.803854+0000 mgr.y (mgr.24416) 284 : cluster [DBG] pgmap v513: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:36 vm07 bash[20771]: cluster 2026-03-09T21:23:35.803854+0000 mgr.y (mgr.24416) 284 : cluster [DBG] pgmap v513: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:36 vm07 bash[20771]: cluster 2026-03-09T21:23:35.816802+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:36 vm07 bash[20771]: cluster 2026-03-09T21:23:35.816802+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:36 vm07 bash[28052]: cluster 2026-03-09T21:23:35.803854+0000 mgr.y (mgr.24416) 284 : cluster [DBG] pgmap v513: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:36 vm07 bash[28052]: cluster 2026-03-09T21:23:35.803854+0000 mgr.y (mgr.24416) 284 : cluster [DBG] pgmap v513: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:36 vm07 bash[28052]: cluster 2026-03-09T21:23:35.816802+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-09T21:23:37.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:36 vm07 bash[28052]: cluster 2026-03-09T21:23:35.816802+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-09T21:23:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:36 vm10 bash[23387]: cluster 2026-03-09T21:23:35.803854+0000 mgr.y (mgr.24416) 284 : cluster [DBG] pgmap v513: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:36 vm10 bash[23387]: cluster 2026-03-09T21:23:35.803854+0000 mgr.y (mgr.24416) 284 : cluster [DBG] pgmap v513: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:36 vm10 bash[23387]: cluster 2026-03-09T21:23:35.816802+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-09T21:23:37.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:36 vm10 bash[23387]: cluster 2026-03-09T21:23:35.816802+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-09T21:23:38.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:37 vm07 bash[20771]: audit 2026-03-09T21:23:36.506616+0000 mgr.y (mgr.24416) 285 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:37 vm07 bash[20771]: audit 2026-03-09T21:23:36.506616+0000 mgr.y (mgr.24416) 285 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:37 vm07 bash[20771]: cluster 2026-03-09T21:23:36.862658+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:37 vm07 bash[20771]: cluster 2026-03-09T21:23:36.862658+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:37 vm07 bash[28052]: audit 2026-03-09T21:23:36.506616+0000 mgr.y (mgr.24416) 285 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:37 vm07 bash[28052]: audit 2026-03-09T21:23:36.506616+0000 mgr.y (mgr.24416) 285 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:37 vm07 bash[28052]: cluster 2026-03-09T21:23:36.862658+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-09T21:23:38.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:37 vm07 bash[28052]: cluster 2026-03-09T21:23:36.862658+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-09T21:23:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:37 vm10 bash[23387]: audit 2026-03-09T21:23:36.506616+0000 mgr.y (mgr.24416) 285 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:23:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:37 vm10 bash[23387]: audit 2026-03-09T21:23:36.506616+0000 mgr.y (mgr.24416) 285 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T21:23:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:37 vm10 bash[23387]: cluster 2026-03-09T21:23:36.862658+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-09T21:23:38.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:37 vm10 bash[23387]: cluster 2026-03-09T21:23:36.862658+0000 mon.a (mon.0) 1380 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-09T21:23:38.985 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:23:38 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:23:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T21:23:39.005 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_seek PASSED [ 92%]
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:38 vm07 bash[20771]: cluster 2026-03-09T21:23:37.804426+0000 mgr.y (mgr.24416) 286 : cluster [DBG] pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:38 vm07 bash[20771]: cluster 2026-03-09T21:23:37.804426+0000 mgr.y (mgr.24416) 286 : cluster [DBG] pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:38 vm07 bash[20771]: cluster 2026-03-09T21:23:38.004205+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:38 vm07 bash[20771]: cluster 2026-03-09T21:23:38.004205+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:38 vm07 bash[28052]: cluster 2026-03-09T21:23:37.804426+0000 mgr.y (mgr.24416) 286 : cluster [DBG] pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:38 vm07 bash[28052]: cluster 2026-03-09T21:23:37.804426+0000 mgr.y (mgr.24416) 286 : cluster [DBG] pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:38 vm07 bash[28052]: cluster 2026-03-09T21:23:38.004205+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-09T21:23:39.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:38 vm07 bash[28052]: cluster 2026-03-09T21:23:38.004205+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-09T21:23:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:38 vm10 bash[23387]: cluster 2026-03-09T21:23:37.804426+0000 mgr.y (mgr.24416) 286 : cluster [DBG] pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:38 vm10 bash[23387]: cluster 2026-03-09T21:23:37.804426+0000 mgr.y (mgr.24416) 286 : cluster [DBG] pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:38 vm10 bash[23387]: cluster 2026-03-09T21:23:38.004205+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-09T21:23:39.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:38 vm10 bash[23387]: cluster 2026-03-09T21:23:38.004205+0000 mon.a (mon.0) 1381 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:40 vm07 bash[20771]: cluster 2026-03-09T21:23:39.002409+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:40 vm07 bash[20771]: cluster 2026-03-09T21:23:39.002409+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:40 vm07 bash[20771]: cluster 2026-03-09T21:23:39.804688+0000 mgr.y (mgr.24416) 287 : cluster [DBG] pgmap v519: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:40 vm07 bash[20771]: cluster 2026-03-09T21:23:39.804688+0000 mgr.y (mgr.24416) 287 : cluster [DBG] pgmap v519: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:40 vm07 bash[28052]: cluster 2026-03-09T21:23:39.002409+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:40 vm07 bash[28052]: cluster 2026-03-09T21:23:39.002409+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:40 vm07 bash[28052]: cluster 2026-03-09T21:23:39.804688+0000 mgr.y (mgr.24416) 287 : cluster [DBG] pgmap v519: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:40.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:40 vm07 bash[28052]: cluster 2026-03-09T21:23:39.804688+0000 mgr.y (mgr.24416) 287 : cluster [DBG] pgmap v519: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:40 vm10 bash[23387]: cluster 2026-03-09T21:23:39.002409+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-09T21:23:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:40 vm10 bash[23387]: cluster 2026-03-09T21:23:39.002409+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-09T21:23:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:40 vm10 bash[23387]: cluster 2026-03-09T21:23:39.804688+0000 mgr.y (mgr.24416) 287 : cluster [DBG] pgmap v519: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:40.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:40 vm10 bash[23387]: cluster 2026-03-09T21:23:39.804688+0000 mgr.y (mgr.24416) 287 : cluster [DBG] pgmap v519: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:41.614 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:41 vm07 bash[20771]: cluster 2026-03-09T21:23:39.994556+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:41 vm07 bash[20771]: cluster 2026-03-09T21:23:39.994556+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:41 vm07 bash[20771]: cluster 2026-03-09T21:23:40.019396+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:41 vm07 bash[20771]: cluster 2026-03-09T21:23:40.019396+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:41 vm07 bash[28052]: cluster 2026-03-09T21:23:39.994556+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:41 vm07 bash[28052]: cluster 2026-03-09T21:23:39.994556+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:41 vm07 bash[28052]: cluster 2026-03-09T21:23:40.019396+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-09T21:23:41.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:41 vm07 bash[28052]: cluster 2026-03-09T21:23:40.019396+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-09T21:23:41.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:41 vm10 bash[23387]: cluster 2026-03-09T21:23:39.994556+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:41.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:41 vm10 bash[23387]: cluster 2026-03-09T21:23:39.994556+0000 mon.a (mon.0) 1383 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:23:41.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:41 vm10 bash[23387]: cluster 2026-03-09T21:23:40.019396+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-09T21:23:41.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:41 vm10 bash[23387]: cluster 2026-03-09T21:23:40.019396+0000 mon.a (mon.0) 1384 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-09T21:23:42.257 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_write PASSED [ 93%]
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:42 vm07 bash[20771]: cluster 2026-03-09T21:23:41.171545+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:42 vm07 bash[20771]: cluster 2026-03-09T21:23:41.171545+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:42 vm07 bash[20771]: cluster 2026-03-09T21:23:41.804936+0000 mgr.y (mgr.24416) 288 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:42 vm07 bash[20771]: cluster 2026-03-09T21:23:41.804936+0000 mgr.y (mgr.24416) 288 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:42 vm07 bash[28052]: cluster 2026-03-09T21:23:41.171545+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:42 vm07 bash[28052]: cluster 2026-03-09T21:23:41.171545+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:42 vm07 bash[28052]: cluster 2026-03-09T21:23:41.804936+0000 mgr.y (mgr.24416) 288 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:42.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:42 vm07 bash[28052]: cluster 2026-03-09T21:23:41.804936+0000 mgr.y (mgr.24416) 288 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:42.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:42 vm10 bash[23387]: cluster 2026-03-09T21:23:41.171545+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-09T21:23:42.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:42 vm10 bash[23387]: cluster 2026-03-09T21:23:41.171545+0000 mon.a (mon.0) 1385 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-09T21:23:42.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:42 vm10 bash[23387]: cluster 2026-03-09T21:23:41.804936+0000 mgr.y (mgr.24416) 288 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:42.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:42 vm10 bash[23387]: cluster 2026-03-09T21:23:41.804936+0000 mgr.y (mgr.24416) 288 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:43 vm07 bash[20771]: audit 2026-03-09T21:23:42.159291+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:43 vm07 bash[20771]: audit 2026-03-09T21:23:42.159291+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:43 vm07 bash[20771]: cluster 2026-03-09T21:23:42.251301+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:43 vm07 bash[20771]: cluster 2026-03-09T21:23:42.251301+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:43 vm07 bash[28052]: audit 2026-03-09T21:23:42.159291+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:43 vm07 bash[28052]: audit 2026-03-09T21:23:42.159291+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:43 vm07 bash[28052]: cluster 2026-03-09T21:23:42.251301+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-09T21:23:43.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:43 vm07 bash[28052]: cluster 2026-03-09T21:23:42.251301+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-09T21:23:43.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:43 vm10 bash[23387]: audit 2026-03-09T21:23:42.159291+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:23:43.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:43 vm10 bash[23387]: audit 2026-03-09T21:23:42.159291+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T21:23:43.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:43 vm10 bash[23387]: cluster 2026-03-09T21:23:42.251301+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-09T21:23:43.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:43 vm10 bash[23387]: cluster 2026-03-09T21:23:42.251301+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:44 vm07 bash[20771]: cluster 2026-03-09T21:23:43.279527+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:44 vm07 bash[20771]: cluster 2026-03-09T21:23:43.279527+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:44 vm07 bash[20771]: cluster 2026-03-09T21:23:43.805194+0000 mgr.y (mgr.24416) 289 : cluster [DBG] pgmap v525: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:44 vm07 bash[20771]: cluster 2026-03-09T21:23:43.805194+0000 mgr.y (mgr.24416) 289 : cluster [DBG] pgmap v525: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:44 vm07 bash[20771]: cluster 2026-03-09T21:23:44.281525+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:44 vm07 bash[20771]: cluster 2026-03-09T21:23:44.281525+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:44 vm07 bash[28052]: cluster 2026-03-09T21:23:43.279527+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:44 vm07 bash[28052]: cluster 2026-03-09T21:23:43.279527+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e377: 8 total,
8 up, 8 in 2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:44 vm07 bash[28052]: cluster 2026-03-09T21:23:43.805194+0000 mgr.y (mgr.24416) 289 : cluster [DBG] pgmap v525: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:44 vm07 bash[28052]: cluster 2026-03-09T21:23:43.805194+0000 mgr.y (mgr.24416) 289 : cluster [DBG] pgmap v525: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:44 vm07 bash[28052]: cluster 2026-03-09T21:23:44.281525+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T21:23:44.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:44 vm07 bash[28052]: cluster 2026-03-09T21:23:44.281525+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T21:23:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:44 vm10 bash[23387]: cluster 2026-03-09T21:23:43.279527+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T21:23:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:44 vm10 bash[23387]: cluster 2026-03-09T21:23:43.279527+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in 2026-03-09T21:23:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:44 vm10 bash[23387]: cluster 2026-03-09T21:23:43.805194+0000 mgr.y (mgr.24416) 289 : cluster [DBG] pgmap v525: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:44 vm10 bash[23387]: cluster 2026-03-09T21:23:43.805194+0000 mgr.y (mgr.24416) 289 : cluster [DBG] pgmap v525: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:44 vm10 bash[23387]: cluster 2026-03-09T21:23:44.281525+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T21:23:44.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:44 vm10 bash[23387]: cluster 2026-03-09T21:23:44.281525+0000 mon.a (mon.0) 1388 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:46 vm07 bash[20771]: cluster 2026-03-09T21:23:45.280639+0000 mon.a (mon.0) 1389 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:46 vm07 bash[20771]: cluster 2026-03-09T21:23:45.280639+0000 mon.a (mon.0) 1389 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:46 vm07 bash[20771]: cluster 2026-03-09T21:23:45.805593+0000 mgr.y (mgr.24416) 290 : cluster [DBG] pgmap v528: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:46 vm07 bash[20771]: cluster 2026-03-09T21:23:45.805593+0000 mgr.y (mgr.24416) 290 : cluster [DBG] pgmap v528: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:46 vm07 bash[28052]: cluster 2026-03-09T21:23:45.280639+0000 mon.a (mon.0) 1389 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:46 vm07 bash[28052]: cluster 2026-03-09T21:23:45.280639+0000 mon.a (mon.0) 1389 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:46 vm07 bash[28052]: cluster 2026-03-09T21:23:45.805593+0000 
mgr.y (mgr.24416) 290 : cluster [DBG] pgmap v528: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:46.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:46 vm07 bash[28052]: cluster 2026-03-09T21:23:45.805593+0000 mgr.y (mgr.24416) 290 : cluster [DBG] pgmap v528: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:46 vm10 bash[23387]: cluster 2026-03-09T21:23:45.280639+0000 mon.a (mon.0) 1389 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T21:23:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:46 vm10 bash[23387]: cluster 2026-03-09T21:23:45.280639+0000 mon.a (mon.0) 1389 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in 2026-03-09T21:23:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:46 vm10 bash[23387]: cluster 2026-03-09T21:23:45.805593+0000 mgr.y (mgr.24416) 290 : cluster [DBG] pgmap v528: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:46.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:46 vm10 bash[23387]: cluster 2026-03-09T21:23:45.805593+0000 mgr.y (mgr.24416) 290 : cluster [DBG] pgmap v528: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:46.692 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:23:46 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:23:47.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:47 vm07 bash[20771]: cluster 2026-03-09T21:23:46.287674+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:47 vm07 bash[20771]: cluster 2026-03-09T21:23:46.287674+0000 mon.a (mon.0) 1390 : cluster [DBG] 
osdmap e380: 8 total, 8 up, 8 in 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:47 vm07 bash[20771]: audit 2026-03-09T21:23:46.517354+0000 mgr.y (mgr.24416) 291 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:47 vm07 bash[20771]: audit 2026-03-09T21:23:46.517354+0000 mgr.y (mgr.24416) 291 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:47 vm07 bash[28052]: cluster 2026-03-09T21:23:46.287674+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:47 vm07 bash[28052]: cluster 2026-03-09T21:23:46.287674+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:47 vm07 bash[28052]: audit 2026-03-09T21:23:46.517354+0000 mgr.y (mgr.24416) 291 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:47.865 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:47 vm07 bash[28052]: audit 2026-03-09T21:23:46.517354+0000 mgr.y (mgr.24416) 291 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:47 vm10 bash[23387]: cluster 2026-03-09T21:23:46.287674+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 2026-03-09T21:23:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:47 vm10 bash[23387]: cluster 2026-03-09T21:23:46.287674+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in 
2026-03-09T21:23:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:47 vm10 bash[23387]: audit 2026-03-09T21:23:46.517354+0000 mgr.y (mgr.24416) 291 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:47.942 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:47 vm10 bash[23387]: audit 2026-03-09T21:23:46.517354+0000 mgr.y (mgr.24416) 291 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:48 vm07 bash[20771]: cluster 2026-03-09T21:23:47.565733+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:48 vm07 bash[20771]: cluster 2026-03-09T21:23:47.565733+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:48 vm07 bash[20771]: cluster 2026-03-09T21:23:47.806145+0000 mgr.y (mgr.24416) 292 : cluster [DBG] pgmap v531: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:48 vm07 bash[20771]: cluster 2026-03-09T21:23:47.806145+0000 mgr.y (mgr.24416) 292 : cluster [DBG] pgmap v531: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:23:48 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:23:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:48 vm07 bash[28052]: cluster 2026-03-09T21:23:47.565733+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 
2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:48 vm07 bash[28052]: cluster 2026-03-09T21:23:47.565733+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:48 vm07 bash[28052]: cluster 2026-03-09T21:23:47.806145+0000 mgr.y (mgr.24416) 292 : cluster [DBG] pgmap v531: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:49.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:48 vm07 bash[28052]: cluster 2026-03-09T21:23:47.806145+0000 mgr.y (mgr.24416) 292 : cluster [DBG] pgmap v531: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:48 vm10 bash[23387]: cluster 2026-03-09T21:23:47.565733+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T21:23:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:48 vm10 bash[23387]: cluster 2026-03-09T21:23:47.565733+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in 2026-03-09T21:23:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:48 vm10 bash[23387]: cluster 2026-03-09T21:23:47.806145+0000 mgr.y (mgr.24416) 292 : cluster [DBG] pgmap v531: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:49.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:48 vm10 bash[23387]: cluster 2026-03-09T21:23:47.806145+0000 mgr.y (mgr.24416) 292 : cluster [DBG] pgmap v531: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:49 vm07 bash[20771]: cluster 2026-03-09T21:23:48.717412+0000 mon.a (mon.0) 1392 : cluster [DBG] osdmap e382: 
8 total, 8 up, 8 in 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:49 vm07 bash[20771]: cluster 2026-03-09T21:23:48.717412+0000 mon.a (mon.0) 1392 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:49 vm07 bash[20771]: audit 2026-03-09T21:23:48.718228+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.107:0/2047711098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:49 vm07 bash[20771]: audit 2026-03-09T21:23:48.718228+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.107:0/2047711098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:49 vm07 bash[20771]: audit 2026-03-09T21:23:48.718858+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:49 vm07 bash[20771]: audit 2026-03-09T21:23:48.718858+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:49 vm07 bash[28052]: cluster 2026-03-09T21:23:48.717412+0000 mon.a (mon.0) 1392 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:49 vm07 bash[28052]: cluster 2026-03-09T21:23:48.717412+0000 mon.a (mon.0) 1392 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:49 vm07 bash[28052]: audit 2026-03-09T21:23:48.718228+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 
192.168.123.107:0/2047711098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:49 vm07 bash[28052]: audit 2026-03-09T21:23:48.718228+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.107:0/2047711098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:49 vm07 bash[28052]: audit 2026-03-09T21:23:48.718858+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:49 vm07 bash[28052]: audit 2026-03-09T21:23:48.718858+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:49 vm10 bash[23387]: cluster 2026-03-09T21:23:48.717412+0000 mon.a (mon.0) 1392 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T21:23:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:49 vm10 bash[23387]: cluster 2026-03-09T21:23:48.717412+0000 mon.a (mon.0) 1392 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in 2026-03-09T21:23:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:49 vm10 bash[23387]: audit 2026-03-09T21:23:48.718228+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.107:0/2047711098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:49 vm10 bash[23387]: audit 2026-03-09T21:23:48.718228+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 
192.168.123.107:0/2047711098' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:49 vm10 bash[23387]: audit 2026-03-09T21:23:48.718858+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:49 vm10 bash[23387]: audit 2026-03-09T21:23:48.718858+0000 mon.a (mon.0) 1393 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T21:23:50.723 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoCtxSelfManagedSnaps::test PASSED [ 94%] 2026-03-09T21:23:50.754 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_monmap_dump PASSED [ 95%] 2026-03-09T21:23:50.768 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_osd_bench PASSED [ 96%] 2026-03-09T21:23:51.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: audit 2026-03-09T21:23:49.708956+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: audit 2026-03-09T21:23:49.708956+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:49.712618+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:49.712618+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:49.806432+0000 mgr.y (mgr.24416) 293 : cluster [DBG] pgmap v534: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:49.806432+0000 mgr.y (mgr.24416) 293 : cluster [DBG] pgmap v534: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:49.830265+0000 mon.a (mon.0) 1396 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:49.830265+0000 mon.a (mon.0) 1396 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:50.717627+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:50 vm07 bash[20771]: cluster 2026-03-09T21:23:50.717627+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap 
e384: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: audit 2026-03-09T21:23:49.708956+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: audit 2026-03-09T21:23:49.708956+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:49.712618+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:49.712618+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:49.806432+0000 mgr.y (mgr.24416) 293 : cluster [DBG] pgmap v534: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:49.806432+0000 mgr.y (mgr.24416) 293 : cluster [DBG] pgmap v534: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:49.830265+0000 mon.a (mon.0) 1396 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:49.830265+0000 mon.a (mon.0) 1396 : cluster [WRN] 
Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:50.717627+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T21:23:51.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:50 vm07 bash[28052]: cluster 2026-03-09T21:23:50.717627+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T21:23:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: audit 2026-03-09T21:23:49.708956+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: audit 2026-03-09T21:23:49.708956+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T21:23:51.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:49.712618+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:49.712618+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:49.806432+0000 mgr.y (mgr.24416) 293 : cluster [DBG] pgmap v534: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:49.806432+0000 mgr.y (mgr.24416) 293 : cluster [DBG] pgmap v534: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB 
avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:49.830265+0000 mon.a (mon.0) 1396 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:49.830265+0000 mon.a (mon.0) 1396 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:50.717627+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T21:23:51.193 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:50 vm10 bash[23387]: cluster 2026-03-09T21:23:50.717627+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-09T21:23:51.820 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_ceph_osd_pool_create_utf8 PASSED [ 97%] 2026-03-09T21:23:52.114 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.744831+0000 mon.c (mon.2) 134 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.744831+0000 mon.c (mon.2) 134 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.750247+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 
192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.750247+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.750538+0000 mon.c (mon.2) 136 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.750538+0000 mon.c (mon.2) 136 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.776757+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.107:0/2693711409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.776757+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.107:0/2693711409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.777840+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:51 vm07 bash[20771]: audit 2026-03-09T21:23:50.777840+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.744831+0000 mon.c (mon.2) 134 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.744831+0000 mon.c (mon.2) 134 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.750247+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.750247+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.750538+0000 mon.c (mon.2) 136 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.750538+0000 mon.c (mon.2) 136 : audit [DBG] from='client.? 
192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.776757+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.107:0/2693711409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.776757+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.107:0/2693711409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.777840+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:51 vm07 bash[28052]: audit 2026-03-09T21:23:50.777840+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.744831+0000 mon.c (mon.2) 134 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.744831+0000 mon.c (mon.2) 134 : audit [DBG] from='client.? 
192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.750247+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.750247+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.750538+0000 mon.c (mon.2) 136 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.750538+0000 mon.c (mon.2) 136 : audit [DBG] from='client.? 192.168.123.107:0/954619674' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.776757+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.107:0/2693711409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.776757+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 
192.168.123.107:0/2693711409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.777840+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:52.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:51 vm10 bash[23387]: audit 2026-03-09T21:23:50.777840+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: audit 2026-03-09T21:23:51.799925+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: audit 2026-03-09T21:23:51.799925+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: cluster 2026-03-09T21:23:51.806775+0000 mgr.y (mgr.24416) 294 : cluster [DBG] pgmap v537: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: cluster 2026-03-09T21:23:51.806775+0000 mgr.y (mgr.24416) 294 : cluster [DBG] pgmap v537: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: cluster 2026-03-09T21:23:51.812401+0000 mon.a (mon.0) 1400 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: cluster 2026-03-09T21:23:51.812401+0000 mon.a (mon.0) 1400 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: cluster 2026-03-09T21:23:52.814717+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:52 vm07 bash[20771]: cluster 2026-03-09T21:23:52.814717+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: audit 2026-03-09T21:23:51.799925+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: audit 2026-03-09T21:23:51.799925+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: cluster 2026-03-09T21:23:51.806775+0000 mgr.y (mgr.24416) 294 : cluster [DBG] pgmap v537: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: cluster 2026-03-09T21:23:51.806775+0000 mgr.y (mgr.24416) 294 : cluster [DBG] pgmap v537: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: cluster 2026-03-09T21:23:51.812401+0000 mon.a (mon.0) 1400 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: cluster 2026-03-09T21:23:51.812401+0000 mon.a (mon.0) 1400 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: cluster 2026-03-09T21:23:52.814717+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T21:23:53.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:52 vm07 bash[28052]: cluster 2026-03-09T21:23:52.814717+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: audit 2026-03-09T21:23:51.799925+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: audit 2026-03-09T21:23:51.799925+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: cluster 2026-03-09T21:23:51.806775+0000 mgr.y (mgr.24416) 294 : cluster [DBG] pgmap v537: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: cluster 2026-03-09T21:23:51.806775+0000 mgr.y (mgr.24416) 294 : cluster [DBG] pgmap v537: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: cluster 2026-03-09T21:23:51.812401+0000 mon.a (mon.0) 1400 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: cluster 2026-03-09T21:23:51.812401+0000 mon.a (mon.0) 1400 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: cluster 2026-03-09T21:23:52.814717+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T21:23:53.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:52 vm10 bash[23387]: cluster 2026-03-09T21:23:52.814717+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:54 vm07 bash[20771]: cluster 2026-03-09T21:23:53.807052+0000 mgr.y (mgr.24416) 295 : cluster [DBG] pgmap v539: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:54 vm07 bash[20771]: cluster 2026-03-09T21:23:53.807052+0000 mgr.y (mgr.24416) 295 : cluster [DBG] pgmap v539: 212 pgs: 32 
unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:54 vm07 bash[20771]: cluster 2026-03-09T21:23:53.825601+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:54 vm07 bash[20771]: cluster 2026-03-09T21:23:53.825601+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:54 vm07 bash[28052]: cluster 2026-03-09T21:23:53.807052+0000 mgr.y (mgr.24416) 295 : cluster [DBG] pgmap v539: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:54 vm07 bash[28052]: cluster 2026-03-09T21:23:53.807052+0000 mgr.y (mgr.24416) 295 : cluster [DBG] pgmap v539: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:54 vm07 bash[28052]: cluster 2026-03-09T21:23:53.825601+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T21:23:55.115 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:54 vm07 bash[28052]: cluster 2026-03-09T21:23:53.825601+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T21:23:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:54 vm10 bash[23387]: cluster 2026-03-09T21:23:53.807052+0000 mgr.y (mgr.24416) 295 : cluster [DBG] pgmap v539: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s 2026-03-09T21:23:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:54 vm10 bash[23387]: 
cluster 2026-03-09T21:23:53.807052+0000 mgr.y (mgr.24416) 295 : cluster [DBG] pgmap v539: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 750 B/s rd, 0 op/s 2026-03-09T21:23:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:54 vm10 bash[23387]: cluster 2026-03-09T21:23:53.825601+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T21:23:55.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:54 vm10 bash[23387]: cluster 2026-03-09T21:23:53.825601+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-09T21:23:56.008 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test PASSED [ 98%] 2026-03-09T21:23:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:55 vm10 bash[23387]: cluster 2026-03-09T21:23:54.831906+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T21:23:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:55 vm10 bash[23387]: cluster 2026-03-09T21:23:54.831906+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T21:23:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:55 vm10 bash[23387]: cluster 2026-03-09T21:23:54.832221+0000 mon.a (mon.0) 1404 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:56.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:55 vm10 bash[23387]: cluster 2026-03-09T21:23:54.832221+0000 mon.a (mon.0) 1404 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:55 vm07 bash[20771]: cluster 2026-03-09T21:23:54.831906+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
21:23:55 vm07 bash[20771]: cluster 2026-03-09T21:23:54.831906+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:55 vm07 bash[20771]: cluster 2026-03-09T21:23:54.832221+0000 mon.a (mon.0) 1404 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:55 vm07 bash[20771]: cluster 2026-03-09T21:23:54.832221+0000 mon.a (mon.0) 1404 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:55 vm07 bash[28052]: cluster 2026-03-09T21:23:54.831906+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:55 vm07 bash[28052]: cluster 2026-03-09T21:23:54.831906+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:55 vm07 bash[28052]: cluster 2026-03-09T21:23:54.832221+0000 mon.a (mon.0) 1404 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:56.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:55 vm07 bash[28052]: cluster 2026-03-09T21:23:54.832221+0000 mon.a (mon.0) 1404 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T21:23:56.916 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:23:56 vm10 bash[48970]: debug there is no tcmu-runner data available 2026-03-09T21:23:57.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:56 vm10 bash[23387]: cluster 2026-03-09T21:23:55.807311+0000 mgr.y (mgr.24416) 296 : cluster [DBG] pgmap v542: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 
487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:57.245 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:56 vm10 bash[23387]: cluster 2026-03-09T21:23:55.807311+0000 mgr.y (mgr.24416) 296 : cluster [DBG] pgmap v542: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:57.245 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:56 vm10 bash[23387]: cluster 2026-03-09T21:23:56.008248+0000 mon.a (mon.0) 1405 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T21:23:57.245 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:56 vm10 bash[23387]: cluster 2026-03-09T21:23:56.008248+0000 mon.a (mon.0) 1405 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T21:23:57.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:56 vm07 bash[20771]: cluster 2026-03-09T21:23:55.807311+0000 mgr.y (mgr.24416) 296 : cluster [DBG] pgmap v542: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:56 vm07 bash[20771]: cluster 2026-03-09T21:23:55.807311+0000 mgr.y (mgr.24416) 296 : cluster [DBG] pgmap v542: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:56 vm07 bash[20771]: cluster 2026-03-09T21:23:56.008248+0000 mon.a (mon.0) 1405 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:56 vm07 bash[20771]: cluster 2026-03-09T21:23:56.008248+0000 mon.a (mon.0) 1405 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:56 vm07 bash[28052]: cluster 2026-03-09T21:23:55.807311+0000 mgr.y (mgr.24416) 
296 : cluster [DBG] pgmap v542: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:56 vm07 bash[28052]: cluster 2026-03-09T21:23:55.807311+0000 mgr.y (mgr.24416) 296 : cluster [DBG] pgmap v542: 212 pgs: 32 unknown, 16 creating+peering, 164 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:56 vm07 bash[28052]: cluster 2026-03-09T21:23:56.008248+0000 mon.a (mon.0) 1405 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T21:23:57.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:56 vm07 bash[28052]: cluster 2026-03-09T21:23:56.008248+0000 mon.a (mon.0) 1405 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-09T21:23:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:57 vm10 bash[23387]: audit 2026-03-09T21:23:56.527760+0000 mgr.y (mgr.24416) 297 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:57 vm10 bash[23387]: audit 2026-03-09T21:23:56.527760+0000 mgr.y (mgr.24416) 297 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:57 vm10 bash[23387]: cluster 2026-03-09T21:23:57.001907+0000 mon.a (mon.0) 1406 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T21:23:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:57 vm10 bash[23387]: cluster 2026-03-09T21:23:57.001907+0000 mon.a (mon.0) 1406 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T21:23:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:57 vm10 bash[23387]: audit 
2026-03-09T21:23:57.286037+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:58.192 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:57 vm10 bash[23387]: audit 2026-03-09T21:23:57.286037+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:58.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:57 vm07 bash[20771]: audit 2026-03-09T21:23:56.527760+0000 mgr.y (mgr.24416) 297 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:58.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:57 vm07 bash[20771]: audit 2026-03-09T21:23:56.527760+0000 mgr.y (mgr.24416) 297 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:58.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:57 vm07 bash[20771]: cluster 2026-03-09T21:23:57.001907+0000 mon.a (mon.0) 1406 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:57 vm07 bash[20771]: cluster 2026-03-09T21:23:57.001907+0000 mon.a (mon.0) 1406 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:57 vm07 bash[20771]: audit 2026-03-09T21:23:57.286037+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:57 vm07 bash[20771]: audit 2026-03-09T21:23:57.286037+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:57 vm07 bash[28052]: audit 2026-03-09T21:23:56.527760+0000 mgr.y (mgr.24416) 297 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:57 vm07 bash[28052]: audit 2026-03-09T21:23:56.527760+0000 mgr.y (mgr.24416) 297 : audit [DBG] from='client.24400 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:57 vm07 bash[28052]: cluster 2026-03-09T21:23:57.001907+0000 mon.a (mon.0) 1406 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:57 vm07 bash[28052]: cluster 2026-03-09T21:23:57.001907+0000 mon.a (mon.0) 1406 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:57 vm07 bash[28052]: audit 2026-03-09T21:23:57.286037+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:58.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:57 vm07 bash[28052]: audit 2026-03-09T21:23:57.286037+0000 mon.c (mon.2) 137 : audit [DBG] from='mgr.24416 192.168.123.107:0/1190474614' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:23:59.046 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:23:58 vm07 bash[21040]: ::ffff:192.168.123.110 - - [09/Mar/2026:21:23:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T21:23:59.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:59 vm07 bash[20771]: cluster 2026-03-09T21:23:57.808060+0000 mgr.y (mgr.24416) 298 : 
cluster [DBG] pgmap v545: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:59.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:59 vm07 bash[20771]: cluster 2026-03-09T21:23:57.808060+0000 mgr.y (mgr.24416) 298 : cluster [DBG] pgmap v545: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:59.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:59 vm07 bash[20771]: cluster 2026-03-09T21:23:58.017867+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T21:23:59.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:23:59 vm07 bash[20771]: cluster 2026-03-09T21:23:58.017867+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T21:23:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:59 vm07 bash[28052]: cluster 2026-03-09T21:23:57.808060+0000 mgr.y (mgr.24416) 298 : cluster [DBG] pgmap v545: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:59 vm07 bash[28052]: cluster 2026-03-09T21:23:57.808060+0000 mgr.y (mgr.24416) 298 : cluster [DBG] pgmap v545: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:59 vm07 bash[28052]: cluster 2026-03-09T21:23:58.017867+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T21:23:59.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:23:59 vm07 bash[28052]: cluster 2026-03-09T21:23:58.017867+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T21:23:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:59 vm10 bash[23387]: cluster 
2026-03-09T21:23:57.808060+0000 mgr.y (mgr.24416) 298 : cluster [DBG] pgmap v545: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:59 vm10 bash[23387]: cluster 2026-03-09T21:23:57.808060+0000 mgr.y (mgr.24416) 298 : cluster [DBG] pgmap v545: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:23:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:59 vm10 bash[23387]: cluster 2026-03-09T21:23:58.017867+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T21:23:59.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:23:59 vm10 bash[23387]: cluster 2026-03-09T21:23:58.017867+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-09T21:24:00.086 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test_aio_notify PASSED [100%] 2026-03-09T21:24:00.086 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:24:00.086 INFO:tasks.workunit.client.0.vm07.stdout:=============================== warnings summary =============================== 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:210: DeprecationWarning: invalid escape sequence '\-' 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: assert re.match('[0-9a-f\-]{36}', fsid, re.I) 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:960 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: 
/home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:960: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: @pytest.mark.wait 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:996 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:996: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: @pytest.mark.wait 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:1024 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:1024: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: @pytest.mark.wait 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout::210 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: :210: DeprecationWarning: invalid escape sequence '\-' 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout: 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout:-- Docs: https://docs.pytest.org/en/stable/warnings.html 2026-03-09T21:24:00.087 INFO:tasks.workunit.client.0.vm07.stdout:================= 91 passed, 13 warnings in 335.76s (0:05:35) ================== 2026-03-09T21:24:00.114 INFO:tasks.workunit.client.0.vm07.stderr:+ exit 0 2026-03-09T21:24:00.114 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T21:24:00.114 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T21:24:00.121 INFO:tasks.workunit:Stopping ['rados/test_python.sh'] on client.0... 
2026-03-09T21:24:00.121 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T21:24:00.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:00 vm07 bash[20771]: cluster 2026-03-09T21:23:59.075113+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T21:24:00.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:00 vm07 bash[20771]: cluster 2026-03-09T21:23:59.075113+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T21:24:00.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:00 vm07 bash[20771]: cluster 2026-03-09T21:23:59.808363+0000 mgr.y (mgr.24416) 299 : cluster [DBG] pgmap v548: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:24:00.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:00 vm07 bash[20771]: cluster 2026-03-09T21:23:59.808363+0000 mgr.y (mgr.24416) 299 : cluster [DBG] pgmap v548: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:24:00.364 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:00 vm07 bash[28052]: cluster 2026-03-09T21:23:59.075113+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T21:24:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:00 vm07 bash[28052]: cluster 2026-03-09T21:23:59.075113+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T21:24:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:00 vm07 bash[28052]: cluster 2026-03-09T21:23:59.808363+0000 mgr.y (mgr.24416) 299 : cluster [DBG] pgmap v548: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:24:00.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:00 vm07 
bash[28052]: cluster 2026-03-09T21:23:59.808363+0000 mgr.y (mgr.24416) 299 : cluster [DBG] pgmap v548: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:24:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:00 vm10 bash[23387]: cluster 2026-03-09T21:23:59.075113+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T21:24:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:00 vm10 bash[23387]: cluster 2026-03-09T21:23:59.075113+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-09T21:24:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:00 vm10 bash[23387]: cluster 2026-03-09T21:23:59.808363+0000 mgr.y (mgr.24416) 299 : cluster [DBG] pgmap v548: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:24:00.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:00 vm10 bash[23387]: cluster 2026-03-09T21:23:59.808363+0000 mgr.y (mgr.24416) 299 : cluster [DBG] pgmap v548: 212 pgs: 32 creating+peering, 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T21:24:00.587 DEBUG:teuthology.parallel:result is None 2026-03-09T21:24:00.587 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T21:24:00.596 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T21:24:00.596 DEBUG:teuthology.orchestra.run.vm07:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T21:24:00.640 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T21:24:00.640 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T21:24:00.643 INFO:tasks.cephadm:Teardown begin 2026-03-09T21:24:00.643 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 
2026-03-09T21:24:00.692 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T21:24:00.702 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-09T21:24:00.702 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 -- ceph mgr module disable cephadm
2026-03-09T21:24:01.364 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:01 vm07 bash[20771]: cluster 2026-03-09T21:24:00.077194+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-09T21:24:01.365 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:01 vm07 bash[20771]: cluster 2026-03-09T21:24:00.077194+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-09T21:24:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:01 vm07 bash[28052]: cluster 2026-03-09T21:24:00.077194+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-09T21:24:01.365 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:01 vm07 bash[28052]: cluster 2026-03-09T21:24:00.077194+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-09T21:24:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:01 vm10 bash[23387]: cluster 2026-03-09T21:24:00.077194+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-09T21:24:01.442 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:01 vm10 bash[23387]: cluster 2026-03-09T21:24:00.077194+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-09T21:24:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:02 vm07 bash[20771]: cluster 2026-03-09T21:24:01.808627+0000 mgr.y (mgr.24416) 300 : cluster [DBG] pgmap v550: 180 pgs: 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:24:02.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:02 vm07 bash[20771]: cluster 2026-03-09T21:24:01.808627+0000 mgr.y (mgr.24416) 300 : cluster [DBG] pgmap v550: 180 pgs: 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:24:02.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:02 vm07 bash[28052]: cluster 2026-03-09T21:24:01.808627+0000 mgr.y (mgr.24416) 300 : cluster [DBG] pgmap v550: 180 pgs: 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:24:02.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:02 vm07 bash[28052]: cluster 2026-03-09T21:24:01.808627+0000 mgr.y (mgr.24416) 300 : cluster [DBG] pgmap v550: 180 pgs: 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:24:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:02 vm10 bash[23387]: cluster 2026-03-09T21:24:01.808627+0000 mgr.y (mgr.24416) 300 : cluster [DBG] pgmap v550: 180 pgs: 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:24:02.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:02 vm10 bash[23387]: cluster 2026-03-09T21:24:01.808627+0000 mgr.y (mgr.24416) 300 : cluster [DBG] pgmap v550: 180 pgs: 180 active+clean; 455 KiB data, 487 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-09T21:24:03.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:03 vm07 bash[20771]: cluster 2026-03-09T21:24:02.089620+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:24:03.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:03 vm07 bash[20771]: cluster 2026-03-09T21:24:02.089620+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:24:03.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:03 vm07 bash[28052]: cluster 2026-03-09T21:24:02.089620+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:24:03.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:03 vm07 bash[28052]: cluster 2026-03-09T21:24:02.089620+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:24:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:03 vm10 bash[23387]: cluster 2026-03-09T21:24:02.089620+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:24:03.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:03 vm10 bash[23387]: cluster 2026-03-09T21:24:02.089620+0000 mon.a (mon.0) 1410 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T21:24:04.614 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:04 vm07 bash[20771]: cluster 2026-03-09T21:24:03.809168+0000 mgr.y (mgr.24416) 301 : cluster [DBG] pgmap v551: 180 pgs: 180 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:24:04.615 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:04 vm07 bash[20771]: cluster 2026-03-09T21:24:03.809168+0000 mgr.y (mgr.24416) 301 : cluster [DBG] pgmap v551: 180 pgs: 180 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:24:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:04 vm07 bash[28052]: cluster 2026-03-09T21:24:03.809168+0000 mgr.y (mgr.24416) 301 : cluster [DBG] pgmap v551: 180 pgs: 180 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:24:04.615 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:04 vm07 bash[28052]: cluster 2026-03-09T21:24:03.809168+0000 mgr.y (mgr.24416) 301 : cluster [DBG] pgmap v551: 180 pgs: 180 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:24:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:04 vm10 bash[23387]: cluster 2026-03-09T21:24:03.809168+0000 mgr.y (mgr.24416) 301 : cluster [DBG] pgmap v551: 180 pgs: 180 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:24:04.692 INFO:journalctl@ceph.mon.b.vm10.stdout:Mar 09 21:24:04 vm10 bash[23387]: cluster 2026-03-09T21:24:03.809168+0000 mgr.y (mgr.24416) 301 : cluster [DBG] pgmap v551: 180 pgs: 180 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T21:24:05.367 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/mon.c/config
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-09T21:24:05.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T21:24:05.553+0000 7f8e2302f640 -1 monclient: keyring not found
2026-03-09T21:24:05.559 INFO:teuthology.orchestra.run.vm07.stderr:[errno 21] error connecting to the cluster
2026-03-09T21:24:05.615 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:24:05.615 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-09T21:24:05.615 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T21:24:05.618 DEBUG:teuthology.orchestra.run.vm10:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T21:24:05.621 INFO:tasks.cephadm:Stopping all daemons...
2026-03-09T21:24:05.621 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-09T21:24:05.621 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a
2026-03-09T21:24:05.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:05 vm07 systemd[1]: Stopping Ceph mon.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:05 vm07 bash[20771]: debug 2026-03-09T21:24:05.701+0000 7efedaed2640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T21:24:05.865 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 21:24:05 vm07 bash[20771]: debug 2026-03-09T21:24:05.701+0000 7efedaed2640 -1 mon.a@0(leader) e3 *** Got Signal Terminated ***
2026-03-09T21:24:05.998 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.a.service'
2026-03-09T21:24:06.012 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:06.012 INFO:tasks.cephadm.mon.a:Stopped mon.a
2026-03-09T21:24:06.012 INFO:tasks.cephadm.mon.b:Stopping mon.c...
2026-03-09T21:24:06.012 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.c
2026-03-09T21:24:06.189 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:06 vm07 systemd[1]: Stopping Ceph mon.c for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:06.189 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:06 vm07 bash[28052]: debug 2026-03-09T21:24:06.097+0000 7f2d2b60f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T21:24:06.189 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:06 vm07 bash[28052]: debug 2026-03-09T21:24:06.097+0000 7f2d2b60f640 -1 mon.c@2(peon) e3 *** Got Signal Terminated ***
2026-03-09T21:24:06.189 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 21:24:06 vm07 bash[60829]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-mon-c
2026-03-09T21:24:06.190 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 bash[21040]: [09/Mar/2026:21:24:06] ENGINE Bus STOPPING
2026-03-09T21:24:06.190 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 bash[21040]: [09/Mar/2026:21:24:06] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-09T21:24:06.190 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 bash[21040]: [09/Mar/2026:21:24:06] ENGINE Bus STOPPED
2026-03-09T21:24:06.190 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 bash[21040]: [09/Mar/2026:21:24:06] ENGINE Bus STARTING
2026-03-09T21:24:06.222 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.c.service'
2026-03-09T21:24:06.236 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:06.236 INFO:tasks.cephadm.mon.b:Stopped mon.c
2026-03-09T21:24:06.236 INFO:tasks.cephadm.mon.b:Stopping mon.b...
2026-03-09T21:24:06.236 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.b
2026-03-09T21:24:06.423 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mon.b.service'
2026-03-09T21:24:06.438 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:06.438 INFO:tasks.cephadm.mon.b:Stopped mon.b
2026-03-09T21:24:06.438 INFO:tasks.cephadm.mgr.y:Stopping mgr.y...
2026-03-09T21:24:06.438 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y
2026-03-09T21:24:06.497 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 bash[21040]: [09/Mar/2026:21:24:06] ENGINE Serving on http://:::9283
2026-03-09T21:24:06.497 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 bash[21040]: [09/Mar/2026:21:24:06] ENGINE Bus STARTED
2026-03-09T21:24:06.497 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 21:24:06 vm07 systemd[1]: Stopping Ceph mgr.y for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:06.617 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.y.service'
2026-03-09T21:24:06.629 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:06.629 INFO:tasks.cephadm.mgr.y:Stopped mgr.y
2026-03-09T21:24:06.629 INFO:tasks.cephadm.mgr.x:Stopping mgr.x...
2026-03-09T21:24:06.629 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.x
2026-03-09T21:24:06.780 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:24:06 vm10 systemd[1]: Stopping Ceph mgr.x for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:06.780 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:24:06 vm10 bash[52759]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-mgr-x
2026-03-09T21:24:06.780 INFO:journalctl@ceph.mgr.x.vm10.stdout:Mar 09 21:24:06 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.x.service: Main process exited, code=exited, status=143/n/a
2026-03-09T21:24:06.782 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@mgr.x.service'
2026-03-09T21:24:06.794 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:06.794 INFO:tasks.cephadm.mgr.x:Stopped mgr.x
2026-03-09T21:24:06.795 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-09T21:24:06.795 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.0
2026-03-09T21:24:07.114 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:24:06 vm07 systemd[1]: Stopping Ceph osd.0 for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:07.115 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:24:06 vm07 bash[30944]: debug 2026-03-09T21:24:06.837+0000 7f229fe2f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T21:24:07.115 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:24:06 vm07 bash[30944]: debug 2026-03-09T21:24:06.837+0000 7f229fe2f640 -1 osd.0 393 *** Got signal Terminated ***
2026-03-09T21:24:07.115 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:24:06 vm07 bash[30944]: debug 2026-03-09T21:24:06.837+0000 7f229fe2f640 -1 osd.0 393 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T21:24:12.194 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:24:11 vm07 bash[61012]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-0
2026-03-09T21:24:12.306 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.0.service'
2026-03-09T21:24:12.318 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:12.318 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-09T21:24:12.318 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-09T21:24:12.318 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.1
2026-03-09T21:24:12.442 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:24:12 vm10 bash[51847]: ts=2026-03-09T21:24:12.080Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.107:8765: connect: connection refused"
2026-03-09T21:24:12.442 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:24:12 vm10 bash[51847]: ts=2026-03-09T21:24:12.081Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.107:8765: connect: connection refused"
2026-03-09T21:24:12.442 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:24:12 vm10 bash[51847]: ts=2026-03-09T21:24:12.081Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.107:8765: connect: connection refused"
2026-03-09T21:24:12.442 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:24:12 vm10 bash[51847]: ts=2026-03-09T21:24:12.084Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.107:8765: connect: connection refused"
2026-03-09T21:24:12.442 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:24:12 vm10 bash[51847]: ts=2026-03-09T21:24:12.088Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.107:8765: connect: connection refused"
2026-03-09T21:24:12.442 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:24:12 vm10 bash[51847]: ts=2026-03-09T21:24:12.088Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.107:8765: connect: connection refused"
2026-03-09T21:24:12.615 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:12 vm07 systemd[1]: Stopping Ceph osd.1 for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:12.615 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:12 vm07 bash[36993]: debug 2026-03-09T21:24:12.405+0000 7f6fc3fc5640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T21:24:12.615 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:12 vm07 bash[36993]: debug 2026-03-09T21:24:12.405+0000 7f6fc3fc5640 -1 osd.1 393 *** Got signal Terminated ***
2026-03-09T21:24:12.615 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:12 vm07 bash[36993]: debug 2026-03-09T21:24:12.405+0000 7f6fc3fc5640 -1 osd.1 393 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T21:24:17.709 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:17 vm07 bash[61192]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-1
2026-03-09T21:24:17.709 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:17 vm07 bash[61259]: Error response from daemon: No such container: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-1
2026-03-09T21:24:17.954 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.1.service'
2026-03-09T21:24:17.960 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:17 vm07 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.1.service: Deactivated successfully.
2026-03-09T21:24:17.960 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:24:17 vm07 systemd[1]: Stopped Ceph osd.1 for 22c897f4-1bfc-11f1-adaa-13127443f8b3.
2026-03-09T21:24:17.965 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:17.965 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-09T21:24:17.965 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-09T21:24:17.965 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.2
2026-03-09T21:24:18.365 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:24:18 vm07 systemd[1]: Stopping Ceph osd.2 for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:18.365 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:24:18 vm07 bash[42797]: debug 2026-03-09T21:24:18.053+0000 7f290a6b9640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T21:24:18.365 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:24:18 vm07 bash[42797]: debug 2026-03-09T21:24:18.053+0000 7f290a6b9640 -1 osd.2 393 *** Got signal Terminated ***
2026-03-09T21:24:18.365 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:24:18 vm07 bash[42797]: debug 2026-03-09T21:24:18.053+0000 7f290a6b9640 -1 osd.2 393 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T21:24:23.364 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:24:23 vm07 bash[61376]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-2
2026-03-09T21:24:23.435 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.2.service'
2026-03-09T21:24:23.447 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:23.447 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-09T21:24:23.447 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-09T21:24:23.447 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.3
2026-03-09T21:24:23.865 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:24:23 vm07 systemd[1]: Stopping Ceph osd.3 for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:23.865 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:24:23 vm07 bash[48824]: debug 2026-03-09T21:24:23.529+0000 7f76629c8640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T21:24:23.865 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:24:23 vm07 bash[48824]: debug 2026-03-09T21:24:23.529+0000 7f76629c8640 -1 osd.3 393 *** Got signal Terminated ***
2026-03-09T21:24:23.865 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:24:23 vm07 bash[48824]: debug 2026-03-09T21:24:23.529+0000 7f76629c8640 -1 osd.3 393 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T21:24:28.864 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 21:24:28 vm07 bash[61555]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-3
2026-03-09T21:24:29.062 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.3.service'
2026-03-09T21:24:29.074 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:29.074 INFO:tasks.cephadm.osd.3:Stopped osd.3
2026-03-09T21:24:29.074 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-09T21:24:29.074 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.4
2026-03-09T21:24:29.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:24:29 vm10 systemd[1]: Stopping Ceph osd.4 for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:29.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:24:29 vm10 bash[26618]: debug 2026-03-09T21:24:29.121+0000 7fe971fc1640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T21:24:29.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:24:29 vm10 bash[26618]: debug 2026-03-09T21:24:29.121+0000 7fe971fc1640 -1 osd.4 393 *** Got signal Terminated ***
2026-03-09T21:24:29.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:24:29 vm10 bash[26618]: debug 2026-03-09T21:24:29.121+0000 7fe971fc1640 -1 osd.4 393 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T21:24:33.442 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:33 vm10 bash[32520]: debug 2026-03-09T21:24:33.133+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:34.442 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:24:34 vm10 bash[52848]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-4
2026-03-09T21:24:34.442 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:34 vm10 bash[32520]: debug 2026-03-09T21:24:34.097+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:34.540 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.4.service'
2026-03-09T21:24:34.552 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:34.552 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-09T21:24:34.552 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-09T21:24:34.552 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.5
2026-03-09T21:24:34.942 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:34 vm10 systemd[1]: Stopping Ceph osd.5 for 22c897f4-1bfc-11f1-adaa-13127443f8b3...
2026-03-09T21:24:34.964 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:34 vm10 bash[32520]: debug 2026-03-09T21:24:34.633+0000 7fc21c7ca640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T21:24:34.964 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:34 vm10 bash[32520]: debug 2026-03-09T21:24:34.633+0000 7fc21c7ca640 -1 osd.5 393 *** Got signal Terminated ***
2026-03-09T21:24:34.964 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:34 vm10 bash[32520]: debug 2026-03-09T21:24:34.633+0000 7fc21c7ca640 -1 osd.5 393 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T21:24:34.964 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:34 vm10 bash[44771]: debug 2026-03-09T21:24:34.441+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000)
2026-03-09T21:24:35.442 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:35 vm10 bash[38557]: debug 2026-03-09T21:24:35.233+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000)
2026-03-09T21:24:35.442 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:35 vm10 bash[32520]: debug 2026-03-09T21:24:35.129+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:35.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:35 vm10 bash[44771]: debug 2026-03-09T21:24:35.485+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000)
2026-03-09T21:24:36.442 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:36 vm10 bash[38557]: debug 2026-03-09T21:24:36.205+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000)
2026-03-09T21:24:36.442 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:36 vm10 bash[32520]: debug 2026-03-09T21:24:36.181+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:36.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:36 vm10 bash[44771]: debug 2026-03-09T21:24:36.481+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000)
2026-03-09T21:24:37.442 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:37 vm10 bash[38557]: debug 2026-03-09T21:24:37.253+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000)
2026-03-09T21:24:37.442 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:37 vm10 bash[32520]: debug 2026-03-09T21:24:37.177+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:37.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:37 vm10 bash[44771]: debug 2026-03-09T21:24:37.465+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000)
2026-03-09T21:24:38.482 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:38 vm10 bash[38557]: debug 2026-03-09T21:24:38.277+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000)
2026-03-09T21:24:38.482 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:38 vm10 bash[32520]: debug 2026-03-09T21:24:38.217+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:38.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:38 vm10 bash[44771]: debug 2026-03-09T21:24:38.481+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000)
2026-03-09T21:24:39.441 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:39 vm10 bash[38557]: debug 2026-03-09T21:24:39.265+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000)
2026-03-09T21:24:39.441 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:39 vm10 bash[32520]: debug 2026-03-09T21:24:39.185+0000 7fc218de3640 -1 osd.5 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:09.289278+0000 front 2026-03-09T21:24:09.289200+0000 (oldest deadline 2026-03-09T21:24:32.788785+0000)
2026-03-09T21:24:39.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:39 vm10 bash[44771]: debug 2026-03-09T21:24:39.437+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000)
2026-03-09T21:24:39.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:39 vm10 bash[44771]: debug 2026-03-09T21:24:39.437+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000)
2026-03-09T21:24:39.968 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:24:39 vm10 bash[53035]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-5
2026-03-09T21:24:40.013 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.5.service'
2026-03-09T21:24:40.024 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T21:24:40.024 INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-09T21:24:40.024 INFO:tasks.cephadm.osd.6:Stopping osd.6...
2026-03-09T21:24:40.024 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.6 2026-03-09T21:24:40.281 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:40 vm10 systemd[1]: Stopping Ceph osd.6 for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:24:40.281 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:40 vm10 bash[38557]: debug 2026-03-09T21:24:40.109+0000 7f07fe375640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T21:24:40.282 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:40 vm10 bash[38557]: debug 2026-03-09T21:24:40.109+0000 7f07fe375640 -1 osd.6 393 *** Got signal Terminated *** 2026-03-09T21:24:40.282 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:40 vm10 bash[38557]: debug 2026-03-09T21:24:40.113+0000 7f07fe375640 -1 osd.6 393 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:24:40.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:40 vm10 bash[38557]: debug 2026-03-09T21:24:40.277+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000) 2026-03-09T21:24:40.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:40 vm10 bash[44771]: debug 2026-03-09T21:24:40.449+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:40.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:40 vm10 bash[44771]: debug 2026-03-09T21:24:40.449+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 
2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:41.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:41 vm10 bash[38557]: debug 2026-03-09T21:24:41.277+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000) 2026-03-09T21:24:41.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:41 vm10 bash[38557]: debug 2026-03-09T21:24:41.277+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.882184+0000 front 2026-03-09T21:24:14.882396+0000 (oldest deadline 2026-03-09T21:24:40.781809+0000) 2026-03-09T21:24:41.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:41 vm10 bash[44771]: debug 2026-03-09T21:24:41.409+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:41.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:41 vm10 bash[44771]: debug 2026-03-09T21:24:41.409+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:42.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:42 vm10 bash[38557]: debug 2026-03-09T21:24:42.313+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000) 2026-03-09T21:24:42.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:42 vm10 bash[38557]: debug 2026-03-09T21:24:42.313+0000 7f07fa18d640 
-1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.882184+0000 front 2026-03-09T21:24:14.882396+0000 (oldest deadline 2026-03-09T21:24:40.781809+0000) 2026-03-09T21:24:42.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:42 vm10 bash[44771]: debug 2026-03-09T21:24:42.397+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:42.693 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:42 vm10 bash[44771]: debug 2026-03-09T21:24:42.397+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:43.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:43 vm10 bash[38557]: debug 2026-03-09T21:24:43.321+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000) 2026-03-09T21:24:43.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:43 vm10 bash[38557]: debug 2026-03-09T21:24:43.321+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.882184+0000 front 2026-03-09T21:24:14.882396+0000 (oldest deadline 2026-03-09T21:24:40.781809+0000) 2026-03-09T21:24:43.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:43 vm10 bash[44771]: debug 2026-03-09T21:24:43.389+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:43.692 
INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:43 vm10 bash[44771]: debug 2026-03-09T21:24:43.389+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:44.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:44 vm10 bash[38557]: debug 2026-03-09T21:24:44.289+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:08.981618+0000 front 2026-03-09T21:24:08.981742+0000 (oldest deadline 2026-03-09T21:24:34.881575+0000) 2026-03-09T21:24:44.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:44 vm10 bash[38557]: debug 2026-03-09T21:24:44.289+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.882184+0000 front 2026-03-09T21:24:14.882396+0000 (oldest deadline 2026-03-09T21:24:40.781809+0000) 2026-03-09T21:24:44.692 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:44 vm10 bash[38557]: debug 2026-03-09T21:24:44.289+0000 7f07fa18d640 -1 osd.6 393 heartbeat_check: no reply from 192.168.123.107:6811 osd.2 since back 2026-03-09T21:24:20.782257+0000 front 2026-03-09T21:24:20.782221+0000 (oldest deadline 2026-03-09T21:24:43.682047+0000) 2026-03-09T21:24:44.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:44 vm10 bash[44771]: debug 2026-03-09T21:24:44.437+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:44.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:44 vm10 bash[44771]: debug 2026-03-09T21:24:44.437+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 
2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:45.442 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 09 21:24:45 vm10 bash[53213]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-6 2026-03-09T21:24:45.549 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.6.service' 2026-03-09T21:24:45.563 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:24:45.563 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T21:24:45.563 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-09T21:24:45.563 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.7 2026-03-09T21:24:45.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:45 vm10 bash[44771]: debug 2026-03-09T21:24:45.465+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:45.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:45 vm10 bash[44771]: debug 2026-03-09T21:24:45.465+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:45.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:45 vm10 systemd[1]: Stopping Ceph osd.7 for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 
2026-03-09T21:24:45.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:45 vm10 bash[44771]: debug 2026-03-09T21:24:45.653+0000 7fa1ec2dd640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T21:24:45.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:45 vm10 bash[44771]: debug 2026-03-09T21:24:45.653+0000 7fa1ec2dd640 -1 osd.7 393 *** Got signal Terminated *** 2026-03-09T21:24:45.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:45 vm10 bash[44771]: debug 2026-03-09T21:24:45.653+0000 7fa1ec2dd640 -1 osd.7 393 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:24:46.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:46 vm10 bash[44771]: debug 2026-03-09T21:24:46.449+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:46.942 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:46 vm10 bash[44771]: debug 2026-03-09T21:24:46.449+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:47.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:47 vm10 bash[44771]: debug 2026-03-09T21:24:47.413+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:47.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:47 vm10 bash[44771]: debug 2026-03-09T21:24:47.413+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 
192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:47.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:47 vm10 bash[44771]: debug 2026-03-09T21:24:47.413+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6811 osd.2 since back 2026-03-09T21:24:22.486717+0000 front 2026-03-09T21:24:22.487082+0000 (oldest deadline 2026-03-09T21:24:46.586467+0000) 2026-03-09T21:24:48.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:48 vm10 bash[44771]: debug 2026-03-09T21:24:48.397+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:48.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:48 vm10 bash[44771]: debug 2026-03-09T21:24:48.397+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:48.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:48 vm10 bash[44771]: debug 2026-03-09T21:24:48.397+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6811 osd.2 since back 2026-03-09T21:24:22.486717+0000 front 2026-03-09T21:24:22.487082+0000 (oldest deadline 2026-03-09T21:24:46.586467+0000) 2026-03-09T21:24:49.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:49 vm10 bash[44771]: debug 2026-03-09T21:24:49.381+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:49.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:49 vm10 bash[44771]: debug 
2026-03-09T21:24:49.381+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:49.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:49 vm10 bash[44771]: debug 2026-03-09T21:24:49.381+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6811 osd.2 since back 2026-03-09T21:24:22.486717+0000 front 2026-03-09T21:24:22.487082+0000 (oldest deadline 2026-03-09T21:24:46.586467+0000) 2026-03-09T21:24:50.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:50 vm10 bash[44771]: debug 2026-03-09T21:24:50.389+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6803 osd.0 since back 2026-03-09T21:24:10.786206+0000 front 2026-03-09T21:24:10.786078+0000 (oldest deadline 2026-03-09T21:24:34.285687+0000) 2026-03-09T21:24:50.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:50 vm10 bash[44771]: debug 2026-03-09T21:24:50.389+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6807 osd.1 since back 2026-03-09T21:24:14.286414+0000 front 2026-03-09T21:24:14.286147+0000 (oldest deadline 2026-03-09T21:24:38.985959+0000) 2026-03-09T21:24:50.692 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:50 vm10 bash[44771]: debug 2026-03-09T21:24:50.389+0000 7fa1e80f5640 -1 osd.7 393 heartbeat_check: no reply from 192.168.123.107:6811 osd.2 since back 2026-03-09T21:24:22.486717+0000 front 2026-03-09T21:24:22.487082+0000 (oldest deadline 2026-03-09T21:24:46.586467+0000) 2026-03-09T21:24:50.991 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 09 21:24:50 vm10 bash[53400]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-osd-7 2026-03-09T21:24:51.035 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@osd.7.service' 2026-03-09T21:24:51.046 DEBUG:teuthology.orchestra.run:got remote 
process result: None 2026-03-09T21:24:51.046 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T21:24:51.046 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-09T21:24:51.046 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@rgw.foo.a 2026-03-09T21:24:51.364 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:24:51 vm07 systemd[1]: Stopping Ceph rgw.foo.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:24:51.364 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:24:51 vm07 bash[52961]: debug 2026-03-09T21:24:51.093+0000 7f0cc12af640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T21:24:51.365 INFO:journalctl@ceph.rgw.foo.a.vm07.stdout:Mar 09 21:24:51 vm07 bash[52961]: debug 2026-03-09T21:24:51.093+0000 7f0cc4b1e980 -1 shutting down 2026-03-09T21:25:01.184 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@rgw.foo.a.service' 2026-03-09T21:25:01.196 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:25:01.196 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-09T21:25:01.196 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-09T21:25:01.196 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@prometheus.a 2026-03-09T21:25:01.309 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@prometheus.a.service' 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 systemd[1]: Stopping Ceph prometheus.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 
2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 
2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.241Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.245Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.245Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[51847]: ts=2026-03-09T21:25:01.245Z caller=main.go:1273 level=info msg="See you next time!" 
2026-03-09T21:25:01.309 INFO:journalctl@ceph.prometheus.a.vm10.stdout:Mar 09 21:25:01 vm10 bash[53591]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-prometheus-a 2026-03-09T21:25:01.319 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:25:01.319 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T21:25:01.319 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 --force --keep-logs 2026-03-09T21:25:01.421 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:25:06.327 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:06.327 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:06.584 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:06.584 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:06.864 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:06.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:06.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: Stopping Ceph alertmanager.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:25:06.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 bash[56094]: ts=2026-03-09T21:25:06.723Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 
2026-03-09T21:25:06.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 bash[61974]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-alertmanager-a 2026-03-09T21:25:06.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@alertmanager.a.service: Deactivated successfully. 2026-03-09T21:25:06.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: Stopped Ceph alertmanager.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:25:07.246 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:07.246 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:07 vm07 systemd[1]: Stopping Ceph node-exporter.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:25:07.246 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:07 vm07 bash[62099]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-node-exporter-a 2026-03-09T21:25:07.246 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:07 vm07 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T21:25:07.246 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:07 vm07 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T21:25:07.246 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:07 vm07 systemd[1]: Stopped Ceph node-exporter.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:25:07.246 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 21:25:06 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:07.502 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 21:25:07 vm07 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:09.052 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 --force --keep-logs 2026-03-09T21:25:09.149 INFO:teuthology.orchestra.run.vm10.stdout:Deleting cluster with fsid: 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:25:13.942 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:13 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:13.942 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:13 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:13.942 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:13 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.263 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.263 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.263 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.539 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.539 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.539 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.790 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.790 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:14.790 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:15.192 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:15.192 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:25:15.192 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:15.192 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:14 vm10 systemd[1]: Stopping Ceph iscsi.iscsi.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:25:15.192 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:14 vm10 bash[48970]: debug Shutdown received 2026-03-09T21:25:25.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:24 vm10 bash[53901]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-iscsi-iscsi-a 2026-03-09T21:25:25.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:24 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T21:25:25.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-09T21:25:25.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: Stopped Ceph iscsi.iscsi.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:25:25.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:25:25.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.442 INFO:journalctl@ceph.iscsi.iscsi.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:25:25.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.442 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.442 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.443 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T21:25:25.772 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: Stopping Ceph grafana.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 bash[51199]: logger=server t=2026-03-09T21:25:25.540278408Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 bash[51199]: logger=tracing t=2026-03-09T21:25:25.540474845Z level=info msg="Closing tracing" 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 bash[51199]: logger=ticker t=2026-03-09T21:25:25.541010238Z level=info msg=stopped last_tick=2026-03-09T21:25:20Z 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 bash[51199]: logger=grafana-apiserver t=2026-03-09T21:25:25.541548105Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 bash[54066]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-grafana-a 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@grafana.a.service: Deactivated successfully. 2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: Stopped Ceph grafana.a for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 
2026-03-09T21:25:25.773 INFO:journalctl@ceph.grafana.a.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:26.066 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:25 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:26.067 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 systemd[1]: Stopping Ceph node-exporter.b for 22c897f4-1bfc-11f1-adaa-13127443f8b3... 2026-03-09T21:25:26.332 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 bash[54229]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-node-exporter-b 2026-03-09T21:25:26.333 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T21:25:26.333 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 bash[54284]: Error response from daemon: No such container: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3-node-exporter-b 2026-03-09T21:25:26.333 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 systemd[1]: ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@node-exporter.b.service: Failed with result 'exit-code'. 
2026-03-09T21:25:26.333 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 systemd[1]: Stopped Ceph node-exporter.b for 22c897f4-1bfc-11f1-adaa-13127443f8b3. 2026-03-09T21:25:26.683 INFO:journalctl@ceph.node-exporter.b.vm10.stdout:Mar 09 21:25:26 vm10 systemd[1]: /etc/systemd/system/ceph-22c897f4-1bfc-11f1-adaa-13127443f8b3@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T21:25:27.259 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T21:25:27.267 INFO:teuthology.orchestra.run.vm07.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T21:25:27.267 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T21:25:27.267 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T21:25:27.277 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T21:25:27.277 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658/remote/vm07/crash 2026-03-09T21:25:27.277 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/crash -- . 
2026-03-09T21:25:27.316 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/crash: Cannot open: No such file or directory 2026-03-09T21:25:27.316 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now 2026-03-09T21:25:27.317 DEBUG:teuthology.misc:Transferring archived files from vm10:/var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658/remote/vm10/crash 2026-03-09T21:25:27.317 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/crash -- . 2026-03-09T21:25:27.326 INFO:teuthology.orchestra.run.vm10.stderr:tar: /var/lib/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/crash: Cannot open: No such file or directory 2026-03-09T21:25:27.326 INFO:teuthology.orchestra.run.vm10.stderr:tar: Error is not recoverable: exiting now 2026-03-09T21:25:27.326 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T21:25:27.326 DEBUG:teuthology.orchestra.run.vm07:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(PG_' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | head -n 1 2026-03-09T21:25:27.372 INFO:tasks.cephadm:Compressing logs... 
2026-03-09T21:25:27.392 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T21:25:27.417 DEBUG:teuthology.orchestra.run.vm10:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T21:25:27.424 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T21:25:27.434 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T21:25:27.435 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.3.log 2026-03-09T21:25:27.435 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log 2026-03-09T21:25:27.435 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.3.log: 90.5% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T21:25:27.435 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.c.log 2026-03-09T21:25:27.435 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log: 92.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log.gz 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.1.log 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T21:25:27.436 
INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mgr.x.log 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.b.log 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mgr.x.log: /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.5.log 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.b.log: 91.5% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mgr.x.log.gz 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.7.log 2026-03-09T21:25:27.436 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.5.log: 87.0% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.log.gz 2026-03-09T21:25:27.437 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mgr.y.log 2026-03-09T21:25:27.443 INFO:teuthology.orchestra.run.vm10.stderr: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T21:25:27.444 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.6.log 2026-03-09T21:25:27.449 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.1.log: gzip -5 --verbose -- 
/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.a.log 2026-03-09T21:25:27.460 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.audit.log 2026-03-09T21:25:27.461 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.2.log 2026-03-09T21:25:27.461 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.audit.log 2026-03-09T21:25:27.461 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-volume.log 2026-03-09T21:25:27.464 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.audit.log: 90.4% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.audit.log.gz 2026-03-09T21:25:27.474 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.cephadm.log 2026-03-09T21:25:27.477 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-volume.log 2026-03-09T21:25:27.478 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-client.rgw.foo.a.log 2026-03-09T21:25:27.480 INFO:teuthology.orchestra.run.vm07.stderr: 94.1%/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-volume.log: -- replaced with 
/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.audit.log.gz 2026-03-09T21:25:27.485 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.cephadm.log 2026-03-09T21:25:27.499 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.4.log 2026-03-09T21:25:27.499 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.cephadm.log: 80.0% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.cephadm.log.gz 2026-03-09T21:25:27.499 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/tcmu-runner.log 2026-03-09T21:25:27.499 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-client.rgw.foo.a.log: 59.3% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-client.rgw.foo.a.log.gz 2026-03-09T21:25:27.499 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.0.log 2026-03-09T21:25:27.499 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph.cephadm.log.gz 2026-03-09T21:25:27.506 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.4.log: 96.0% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-volume.log.gz 2026-03-09T21:25:27.512 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/tcmu-runner.log: 72.8% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/tcmu-runner.log.gz 2026-03-09T21:25:27.525 
INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.0.log: 96.1% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-volume.log.gz 2026-03-09T21:25:27.821 INFO:teuthology.orchestra.run.vm07.stderr: 89.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mgr.y.log.gz 2026-03-09T21:25:27.973 INFO:teuthology.orchestra.run.vm10.stderr: 92.3% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.b.log.gz 2026-03-09T21:25:28.117 INFO:teuthology.orchestra.run.vm07.stderr: 92.1% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.c.log.gz 2026-03-09T21:25:28.815 INFO:teuthology.orchestra.run.vm07.stderr: 91.5% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-mon.a.log.gz 2026-03-09T21:25:30.002 INFO:teuthology.orchestra.run.vm10.stderr: 94.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.6.log.gz 2026-03-09T21:25:30.105 INFO:teuthology.orchestra.run.vm10.stderr: 94.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.5.log.gz 2026-03-09T21:25:30.152 INFO:teuthology.orchestra.run.vm10.stderr: 94.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.4.log.gz 2026-03-09T21:25:30.196 INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.2.log.gz 2026-03-09T21:25:30.210 INFO:teuthology.orchestra.run.vm10.stderr: 94.9% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.7.log.gz 2026-03-09T21:25:30.212 INFO:teuthology.orchestra.run.vm10.stderr: 2026-03-09T21:25:30.212 INFO:teuthology.orchestra.run.vm10.stderr:real 0m2.791s 2026-03-09T21:25:30.212 INFO:teuthology.orchestra.run.vm10.stderr:user 0m5.204s 2026-03-09T21:25:30.212 INFO:teuthology.orchestra.run.vm10.stderr:sys 0m0.308s 2026-03-09T21:25:30.347 
INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.1.log.gz 2026-03-09T21:25:30.418 INFO:teuthology.orchestra.run.vm07.stderr: 94.8% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.0.log.gz 2026-03-09T21:25:30.515 INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/22c897f4-1bfc-11f1-adaa-13127443f8b3/ceph-osd.3.log.gz 2026-03-09T21:25:30.517 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T21:25:30.517 INFO:teuthology.orchestra.run.vm07.stderr:real 0m3.098s 2026-03-09T21:25:30.517 INFO:teuthology.orchestra.run.vm07.stderr:user 0m5.765s 2026-03-09T21:25:30.517 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m0.318s 2026-03-09T21:25:30.517 INFO:tasks.cephadm:Archiving logs... 2026-03-09T21:25:30.517 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658/remote/vm07/log 2026-03-09T21:25:30.517 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T21:25:30.818 DEBUG:teuthology.misc:Transferring archived files from vm10:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658/remote/vm10/log 2026-03-09T21:25:30.819 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T21:25:31.068 INFO:tasks.cephadm:Removing cluster... 
2026-03-09T21:25:31.068 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 --force 2026-03-09T21:25:31.168 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:25:32.477 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 22c897f4-1bfc-11f1-adaa-13127443f8b3 --force 2026-03-09T21:25:32.574 INFO:teuthology.orchestra.run.vm10.stdout:Deleting cluster with fsid: 22c897f4-1bfc-11f1-adaa-13127443f8b3 2026-03-09T21:25:33.841 INFO:tasks.cephadm:Removing cephadm ... 2026-03-09T21:25:33.841 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T21:25:33.845 DEBUG:teuthology.orchestra.run.vm10:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T21:25:33.852 INFO:tasks.cephadm:Teardown complete 2026-03-09T21:25:33.852 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T21:25:33.873 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T21:25:33.874 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T21:25:33.890 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T21:25:33.912 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 
2026-03-09T21:25:33.912 DEBUG:teuthology.orchestra.run.vm07:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T21:25:33.917 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T21:25:33.917 DEBUG:teuthology.orchestra.run.vm10:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T21:25:33.983 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-09T21:25:33.985 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-09T21:25:34.172 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-09T21:25:34.173 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-09T21:25:34.211 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-09T21:25:34.212 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-09T21:25:34.338 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:34.338 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T21:25:34.339 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T21:25:34.339 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:34.356 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED: 2026-03-09T21:25:34.358 INFO:teuthology.orchestra.run.vm10.stdout: ceph* 2026-03-09T21:25:34.439 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:34.440 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T21:25:34.441 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T21:25:34.441 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:34.460 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-09T21:25:34.461 INFO:teuthology.orchestra.run.vm07.stdout: ceph* 2026-03-09T21:25:34.623 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T21:25:34.623 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T21:25:34.648 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T21:25:34.648 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T21:25:34.662 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 
5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T21:25:34.663 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:34.694 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T21:25:34.697 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:35.929 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:35.956 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:35.967 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-09T21:25:35.993 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 
2026-03-09T21:25:36.204 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:36.204 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:36.227 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:36.228 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:36.491 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:36.492 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:25:36.493 INFO:teuthology.orchestra.run.vm10.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T21:25:36.493 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:36.500 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:36.501 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:25:36.503 INFO:teuthology.orchestra.run.vm07.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T21:25:36.503 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:36.513 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:25:36.514 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-09T21:25:36.519 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:25:36.520 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-09T21:25:36.711 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T21:25:36.711 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T21:25:36.718 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T21:25:36.718 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T21:25:36.748 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-09T21:25:36.751 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:36.764 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-09T21:25:36.767 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:36.782 INFO:teuthology.orchestra.run.vm07.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:36.786 INFO:teuthology.orchestra.run.vm10.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:36.813 INFO:teuthology.orchestra.run.vm07.stdout:Looking for files to backup/remove ...
2026-03-09T21:25:36.815 INFO:teuthology.orchestra.run.vm07.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T21:25:36.816 INFO:teuthology.orchestra.run.vm10.stdout:Looking for files to backup/remove ...
2026-03-09T21:25:36.817 INFO:teuthology.orchestra.run.vm07.stdout:Removing user `cephadm' ...
2026-03-09T21:25:36.818 INFO:teuthology.orchestra.run.vm07.stdout:Warning: group `nogroup' has no more members.
2026-03-09T21:25:36.818 INFO:teuthology.orchestra.run.vm10.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T21:25:36.821 INFO:teuthology.orchestra.run.vm10.stdout:Removing user `cephadm' ...
2026-03-09T21:25:36.821 INFO:teuthology.orchestra.run.vm10.stdout:Warning: group `nogroup' has no more members.
2026-03-09T21:25:36.833 INFO:teuthology.orchestra.run.vm07.stdout:Done.
2026-03-09T21:25:36.834 INFO:teuthology.orchestra.run.vm10.stdout:Done.
2026-03-09T21:25:36.858 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:36.859 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:36.962 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T21:25:36.964 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:36.966 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T21:25:36.968 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:38.051 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:38.086 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:38.184 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:38.221 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:38.309 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:38.310 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:38.385 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:38.385 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:38.568 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:38.568 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:25:38.569 INFO:teuthology.orchestra.run.vm07.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T21:25:38.569 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:38.585 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:25:38.586 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mds*
2026-03-09T21:25:38.625 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:38.626 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T21:25:38.628 INFO:teuthology.orchestra.run.vm10.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T21:25:38.628 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:38.643 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:25:38.643 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mds*
2026-03-09T21:25:38.801 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T21:25:38.801 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T21:25:38.824 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T21:25:38.824 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T21:25:38.846 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T21:25:38.849 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:38.876 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T21:25:38.879 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:39.283 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:39.319 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:39.394 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T21:25:39.397 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:39.415 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T21:25:39.417 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:40.838 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:40.875 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:41.102 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:41.103 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:41.210 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:41.248 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:41.329 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:  sg3-utils-udev
2026-03-09T21:25:41.330 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:41.351 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:25:41.351 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T21:25:41.352 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-k8sevents*
2026-03-09T21:25:41.479 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:41.479 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:41.527 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T21:25:41.527 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T21:25:41.578 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T21:25:41.582 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:41.595 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:41.634 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:41.679 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:41.733 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:41.733 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-09T21:25:41.734 INFO:teuthology.orchestra.run.vm07.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:41.735 INFO:teuthology.orchestra.run.vm07.stdout:  sg3-utils-udev
2026-03-09T21:25:41.735 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:41.751 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:25:41.751 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T21:25:41.752 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-k8sevents*
2026-03-09T21:25:41.964 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T21:25:41.964 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T21:25:42.008 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T21:25:42.011 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:42.026 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:42.054 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:42.103 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:42.218 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T21:25:42.221 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:42.650 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T21:25:42.652 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:43.909 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:43.945 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:44.178 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:44.179 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:44.387 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:44.388 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:44.402 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:25:44.403 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T21:25:44.431 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:44.469 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:44.601 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T21:25:44.601 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T21:25:44.635 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T21:25:44.637 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:44.665 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:44.666 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:44.723 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:44.773 INFO:teuthology.orchestra.run.vm07.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:44.774 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:44.790 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:25:44.791 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T21:25:44.967 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T21:25:44.967 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T21:25:45.001 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T21:25:45.002 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:45.065 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:45.163 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:45.507 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:45.714 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:45.950 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:46.156 INFO:teuthology.orchestra.run.vm10.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:46.392 INFO:teuthology.orchestra.run.vm07.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:46.621 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:46.662 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:46.844 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:46.949 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:47.132 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:47.169 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:25:47.241 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-09T21:25:47.243 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:47.462 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:47.565 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:25:47.640 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-09T21:25:47.642 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:47.870 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:48.326 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:48.329 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:48.768 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:48.775 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:49.190 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:49.295 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:49.733 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:50.915 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:50.954 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:51.123 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:51.123 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:51.231 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:51.238 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:25:51.239 INFO:teuthology.orchestra.run.vm10.stdout: ceph-fuse*
2026-03-09T21:25:51.372 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:51.411 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T21:25:51.411 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T21:25:51.412 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:51.447 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-09T21:25:51.449 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:51.638 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:51.639 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:51.827 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:51.840 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:25:51.842 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse*
2026-03-09T21:25:51.893 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:51.997 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T21:25:52.000 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:52.034 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T21:25:52.034 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T21:25:52.078 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-09T21:25:52.081 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:52.551 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:25:52.654 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T21:25:52.656 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:25:53.632 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:53.671 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:53.898 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:53.899 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:54.109 INFO:teuthology.orchestra.run.vm10.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T21:25:54.109 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:54.109 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:54.109 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:54.110 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:54.137 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:25:54.137 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:54.169 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:54.332 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:54.333 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:54.370 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:54.406 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:54.548 INFO:teuthology.orchestra.run.vm10.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T21:25:54.548 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:54.549 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:54.549 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:54.550 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:54.581 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:25:54.581 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:54.614 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:54.629 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:54.630 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:54.777 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:54.778 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:54.792 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:25:54.793 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:54.828 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:54.838 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:54.838 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:55.055 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:55.056 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:55.079 INFO:teuthology.orchestra.run.vm10.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T21:25:55.079 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:55.079 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:55.079 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:55.080 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:55.081 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:55.109 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:25:55.109 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:55.142 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:25:55.311 INFO:teuthology.orchestra.run.vm07.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T21:25:55.311 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:55.311 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:55.311 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T21:25:55.312 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T21:25:55.313 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:55.342 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:25:55.343 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:25:55.376 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:25:55.376 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:25:55.377 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:25:55.588 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:55.588 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:55.589 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T21:25:55.589 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T21:25:55.589 INFO:teuthology.orchestra.run.vm10.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet zip
2026-03-09T21:25:55.590 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:25:55.606 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:25:55.607 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:25:55.609 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:25:55.610 INFO:teuthology.orchestra.run.vm10.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-09T21:25:55.831 INFO:teuthology.orchestra.run.vm07.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T21:25:55.831 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:25:55.831 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:25:55.831 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T21:25:55.832 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T21:25:55.833 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T21:25:55.833 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T21:25:55.833 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T21:25:55.833 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T21:25:55.833 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T21:25:55.833 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:55.840 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T21:25:55.840 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T21:25:55.857 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 
2026-03-09T21:25:55.858 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:55.890 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T21:25:55.893 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-09T21:25:55.893 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:56.004 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:56.018 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:56.112 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-09T21:25:56.112 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-09T21:25:56.402 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:56.402 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:56.402 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:56.402 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-09T21:25:56.403 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:56.420 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-09T21:25:56.420 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T21:25:56.611 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T21:25:56.611 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T21:25:56.658 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T21:25:56.660 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T21:25:56.672 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:56.684 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:57.298 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:57.337 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-09T21:25:57.551 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-09T21:25:57.551 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-09T21:25:57.755 INFO:teuthology.orchestra.run.vm10.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T21:25:57.755 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:57.755 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:57.755 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:57.755 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections 
python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet zip 2026-03-09T21:25:57.756 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:57.784 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T21:25:57.784 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:57.818 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-09T21:25:57.978 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T21:25:58.014 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-09T21:25:58.045 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-09T21:25:58.045 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-09T21:25:58.135 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-09T21:25:58.135 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-09T21:25:58.211 INFO:teuthology.orchestra.run.vm10.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T21:25:58.211 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:58.211 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:58.211 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:58.211 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:58.212 
INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet zip 2026-03-09T21:25:58.212 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T21:25:58.232 INFO:teuthology.orchestra.run.vm07.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T21:25:58.232 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:58.232 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:58.232 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:58.232 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric 
python3-simplejson 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-09T21:25:58.233 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:58.235 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T21:25:58.235 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:58.253 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T21:25:58.253 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T21:25:58.269 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-09T21:25:58.289 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-09T21:25:58.481 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-09T21:25:58.481 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-09T21:25:58.497 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-09T21:25:58.498 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-09T21:25:58.684 INFO:teuthology.orchestra.run.vm07.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T21:25:58.684 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:58.684 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:58.684 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:58.684 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:58.685 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:58.685 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:58.685 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:58.685 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:58.685 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa 
python3-simplegeneric python3-simplejson 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-09T21:25:58.686 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:58.727 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T21:25:58.727 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T21:25:58.741 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:58.742 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:58.742 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:58.742 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:58.743 INFO:teuthology.orchestra.run.vm10.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:58.743 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:58.743 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:58.743 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:58.743 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:58.743 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet zip 2026-03-09T21:25:58.744 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:58.764 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-09T21:25:58.770 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED: 2026-03-09T21:25:58.770 INFO:teuthology.orchestra.run.vm10.stdout: python3-rbd* 2026-03-09T21:25:58.960 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T21:25:58.960 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T21:25:58.971 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-09T21:25:58.972 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-09T21:25:59.010 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 
117410 files and directories currently installed.) 2026-03-09T21:25:59.012 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:25:59.234 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:25:59.235 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:25:59.235 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T21:25:59.235 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:25:59.236 
INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-09T21:25:59.236 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:25:59.252 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-09T21:25:59.252 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd* 2026-03-09T21:25:59.432 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T21:25:59.432 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T21:25:59.466 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 
2026-03-09T21:25:59.467 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:00.272 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:00.309 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:26:00.538 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:26:00.539 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:26:00.733 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:00.745 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:00.745 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:00.746 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:00.747 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T21:26:00.747 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet zip
2026-03-09T21:26:00.747 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:00.759 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:26:00.760 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-dev* libcephfs2*
2026-03-09T21:26:00.768 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:26:00.985 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T21:26:00.986 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-09T21:26:00.987 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:26:00.988 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:26:01.027 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117402 files and directories currently installed.)
2026-03-09T21:26:01.030 INFO:teuthology.orchestra.run.vm10.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:01.042 INFO:teuthology.orchestra.run.vm10.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:01.069 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:26:01.198 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:01.198 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:01.198 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T21:26:01.198 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T21:26:01.199 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-09T21:26:01.200 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:01.217 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:26:01.219 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-dev* libcephfs2*
2026-03-09T21:26:01.408 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T21:26:01.408 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-09T21:26:01.454 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117402 files and directories currently installed.)
2026-03-09T21:26:01.457 INFO:teuthology.orchestra.run.vm07.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:01.473 INFO:teuthology.orchestra.run.vm07.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:01.499 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:26:02.295 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:02.330 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:26:02.551 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:26:02.552 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:26:02.737 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:02.773 INFO:teuthology.orchestra.run.vm10.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-09T21:26:02.773 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:02.773 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:02.774 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T21:26:02.774 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T21:26:02.774 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet zip
2026-03-09T21:26:02.775 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:02.800 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:26:02.801 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:02.834 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:26:02.949 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:26:02.950 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:26:03.070 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:26:03.071 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:26:03.167 INFO:teuthology.orchestra.run.vm07.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-09T21:26:03.167 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:03.167 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:03.168 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T21:26:03.168 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-09T21:26:03.169 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:03.200 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:26:03.200 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:03.238 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T21:26:03.304 INFO:teuthology.orchestra.run.vm10.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T21:26:03.305 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:03.320 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-09T21:26:03.320 INFO:teuthology.orchestra.run.vm10.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-09T21:26:03.320 INFO:teuthology.orchestra.run.vm10.stdout: qemu-block-extra* rbd-fuse*
2026-03-09T21:26:03.406 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:26:03.407 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:26:03.503 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T21:26:03.503 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-09T21:26:03.534 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:03.534 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:03.534 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T21:26:03.534 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:26:03.534 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T21:26:03.534 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:03.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:03.536 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:03.536 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T21:26:03.536 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T21:26:03.536 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:03.537 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117387 files and directories currently installed.)
2026-03-09T21:26:03.538 INFO:teuthology.orchestra.run.vm10.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:03.550 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-09T21:26:03.550 INFO:teuthology.orchestra.run.vm10.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:03.550 INFO:teuthology.orchestra.run.vm07.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-09T21:26:03.550 INFO:teuthology.orchestra.run.vm07.stdout: qemu-block-extra* rbd-fuse*
2026-03-09T21:26:03.563 INFO:teuthology.orchestra.run.vm10.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:03.575 INFO:teuthology.orchestra.run.vm10.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T21:26:03.762 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T21:26:03.762 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-09T21:26:03.808 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117387 files and directories currently installed.)
2026-03-09T21:26:03.810 INFO:teuthology.orchestra.run.vm07.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:03.823 INFO:teuthology.orchestra.run.vm07.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:03.835 INFO:teuthology.orchestra.run.vm07.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:03.846 INFO:teuthology.orchestra.run.vm07.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T21:26:04.043 INFO:teuthology.orchestra.run.vm10.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:04.059 INFO:teuthology.orchestra.run.vm10.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:04.370 INFO:teuthology.orchestra.run.vm10.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:04.376 INFO:teuthology.orchestra.run.vm07.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:04.393 INFO:teuthology.orchestra.run.vm07.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:04.403 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:26:04.409 INFO:teuthology.orchestra.run.vm07.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T21:26:04.441 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:26:04.453 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:26:04.476 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:26:04.533 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117336 files and directories currently installed.)
2026-03-09T21:26:04.536 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T21:26:04.549 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117336 files and directories currently installed.)
2026-03-09T21:26:04.552 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T21:26:06.188 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:06.202 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:06.228 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:26:06.239 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:26:06.377 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:26:06.378 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:26:06.433 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:26:06.434 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout:Package 'librbd1' is not installed, so not removed
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T21:26:06.481 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:06.496 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:26:06.496 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:06.530 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:26:06.616 INFO:teuthology.orchestra.run.vm07.stdout:Package 'librbd1' is not installed, so not removed
2026-03-09T21:26:06.616 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:06.616 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:06.616 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T21:26:06.616 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:26:06.616 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:06.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T21:26:06.618 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T21:26:06.618 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:06.645 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:26:06.646 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:06.678 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-09T21:26:06.679 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-09T21:26:06.683 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:26:06.848 INFO:teuthology.orchestra.run.vm10.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-09T21:26:06.848 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T21:26:06.848 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T21:26:06.848 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T21:26:06.848 INFO:teuthology.orchestra.run.vm10.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T21:26:06.848 INFO:teuthology.orchestra.run.vm10.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T21:26:06.849 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T21:26:06.870 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T21:26:06.870 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:06.872 DEBUG:teuthology.orchestra.run.vm10:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-09T21:26:06.904 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-09T21:26:06.905 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-09T21:26:06.926 DEBUG:teuthology.orchestra.run.vm10:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T21:26:07.004 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T21:26:07.031 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib 
python3-kubernetes python3-logutils python3-mako 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T21:26:07.032 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T21:26:07.047 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T21:26:07.047 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T21:26:07.048 DEBUG:teuthology.orchestra.run.vm07:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T21:26:07.108 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T21:26:07.188 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-09T21:26:07.218 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-09T21:26:07.219 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED: 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 
2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T21:26:07.318 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T21:26:07.374 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-09T21:26:07.374 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-09T21:26:07.482 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T21:26:07.482 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T21:26:07.518 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 
(Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T21:26:07.520 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T21:26:07.552 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T21:26:07.553 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T21:26:07.583 INFO:teuthology.orchestra.run.vm10.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T21:26:07.595 INFO:teuthology.orchestra.run.vm10.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T21:26:07.608 INFO:teuthology.orchestra.run.vm10.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T21:26:07.620 INFO:teuthology.orchestra.run.vm10.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 
2026-03-09T21:26:07.632 INFO:teuthology.orchestra.run.vm10.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T21:26:07.643 INFO:teuthology.orchestra.run.vm10.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:26:07.654 INFO:teuthology.orchestra.run.vm10.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:26:07.665 INFO:teuthology.orchestra.run.vm10.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:26:07.686 INFO:teuthology.orchestra.run.vm10.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T21:26:07.698 INFO:teuthology.orchestra.run.vm10.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T21:26:07.710 INFO:teuthology.orchestra.run.vm10.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.723 INFO:teuthology.orchestra.run.vm10.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.724 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T21:26:07.724 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T21:26:07.736 INFO:teuthology.orchestra.run.vm10.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.748 INFO:teuthology.orchestra.run.vm10.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.759 INFO:teuthology.orchestra.run.vm10.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T21:26:07.765 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T21:26:07.767 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:26:07.770 INFO:teuthology.orchestra.run.vm10.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T21:26:07.781 INFO:teuthology.orchestra.run.vm10.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T21:26:07.784 INFO:teuthology.orchestra.run.vm07.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T21:26:07.793 INFO:teuthology.orchestra.run.vm10.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T21:26:07.798 INFO:teuthology.orchestra.run.vm07.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T21:26:07.811 INFO:teuthology.orchestra.run.vm07.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T21:26:07.821 INFO:teuthology.orchestra.run.vm10.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T21:26:07.826 INFO:teuthology.orchestra.run.vm07.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T21:26:07.834 INFO:teuthology.orchestra.run.vm10.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T21:26:07.838 INFO:teuthology.orchestra.run.vm07.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T21:26:07.846 INFO:teuthology.orchestra.run.vm10.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T21:26:07.850 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:26:07.858 INFO:teuthology.orchestra.run.vm10.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T21:26:07.861 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-09T21:26:07.870 INFO:teuthology.orchestra.run.vm10.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T21:26:07.872 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T21:26:07.881 INFO:teuthology.orchestra.run.vm10.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T21:26:07.892 INFO:teuthology.orchestra.run.vm07.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T21:26:07.893 INFO:teuthology.orchestra.run.vm10.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T21:26:07.902 INFO:teuthology.orchestra.run.vm07.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T21:26:07.905 INFO:teuthology.orchestra.run.vm10.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T21:26:07.914 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.917 INFO:teuthology.orchestra.run.vm10.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T21:26:07.925 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.927 INFO:teuthology.orchestra.run.vm10.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T21:26:07.936 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.939 INFO:teuthology.orchestra.run.vm10.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T21:26:07.947 INFO:teuthology.orchestra.run.vm07.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T21:26:07.957 INFO:teuthology.orchestra.run.vm10.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T21:26:07.959 INFO:teuthology.orchestra.run.vm07.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T21:26:07.970 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua-any (27ubuntu1) ... 
2026-03-09T21:26:07.971 INFO:teuthology.orchestra.run.vm07.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T21:26:07.981 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T21:26:07.982 INFO:teuthology.orchestra.run.vm07.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T21:26:07.994 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T21:26:07.995 INFO:teuthology.orchestra.run.vm07.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T21:26:08.011 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T21:26:08.023 INFO:teuthology.orchestra.run.vm07.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T21:26:08.031 INFO:teuthology.orchestra.run.vm10.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T21:26:08.034 INFO:teuthology.orchestra.run.vm07.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T21:26:08.045 INFO:teuthology.orchestra.run.vm07.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T21:26:08.057 INFO:teuthology.orchestra.run.vm07.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T21:26:08.068 INFO:teuthology.orchestra.run.vm07.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T21:26:08.079 INFO:teuthology.orchestra.run.vm07.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T21:26:08.091 INFO:teuthology.orchestra.run.vm07.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T21:26:08.103 INFO:teuthology.orchestra.run.vm07.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T21:26:08.115 INFO:teuthology.orchestra.run.vm07.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 
2026-03-09T21:26:08.122 INFO:teuthology.orchestra.run.vm07.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T21:26:08.132 INFO:teuthology.orchestra.run.vm07.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T21:26:08.150 INFO:teuthology.orchestra.run.vm07.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T21:26:08.162 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T21:26:08.173 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T21:26:08.186 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T21:26:08.200 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T21:26:08.217 INFO:teuthology.orchestra.run.vm07.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T21:26:08.515 INFO:teuthology.orchestra.run.vm10.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T21:26:08.548 INFO:teuthology.orchestra.run.vm10.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T21:26:08.576 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T21:26:08.635 INFO:teuthology.orchestra.run.vm07.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T21:26:08.636 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T21:26:08.667 INFO:teuthology.orchestra.run.vm07.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T21:26:08.685 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T21:26:08.694 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T21:26:08.738 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T21:26:08.753 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-webtest (2.0.35-1) ... 
2026-03-09T21:26:08.786 INFO:teuthology.orchestra.run.vm10.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T21:26:08.797 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T21:26:08.804 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T21:26:08.854 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T21:26:08.863 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T21:26:08.913 INFO:teuthology.orchestra.run.vm07.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T21:26:08.926 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T21:26:08.995 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T21:26:09.124 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T21:26:09.191 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T21:26:09.239 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:26:09.279 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T21:26:09.286 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:26:09.337 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T21:26:09.342 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T21:26:09.387 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:26:09.403 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 
2026-03-09T21:26:09.438 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T21:26:09.458 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T21:26:09.491 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T21:26:09.507 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T21:26:09.555 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T21:26:09.558 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T21:26:09.676 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T21:26:09.677 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T21:26:09.827 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T21:26:09.829 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T21:26:09.933 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T21:26:09.933 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T21:26:10.074 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T21:26:10.074 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T21:26:10.122 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T21:26:10.171 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T21:26:10.199 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 
2026-03-09T21:26:10.241 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T21:26:10.260 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T21:26:10.319 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T21:26:10.367 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T21:26:10.372 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T21:26:10.422 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T21:26:10.433 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T21:26:10.482 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T21:26:10.484 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T21:26:10.535 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T21:26:10.538 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T21:26:10.585 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T21:26:10.593 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T21:26:10.641 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T21:26:10.647 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T21:26:10.691 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T21:26:10.696 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T21:26:10.744 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 
2026-03-09T21:26:10.755 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T21:26:10.797 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T21:26:10.810 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T21:26:10.850 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T21:26:10.865 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T21:26:10.901 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T21:26:10.916 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T21:26:10.958 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T21:26:10.970 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T21:26:11.008 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T21:26:11.025 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T21:26:11.035 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T21:26:11.075 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T21:26:11.083 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T21:26:11.130 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T21:26:11.134 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T21:26:11.182 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 
2026-03-09T21:26:11.188 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T21:26:11.219 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T21:26:11.230 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T21:26:11.271 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T21:26:11.279 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T21:26:11.318 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T21:26:11.330 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T21:26:11.369 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T21:26:11.383 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T21:26:11.416 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T21:26:11.430 INFO:teuthology.orchestra.run.vm10.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T21:26:11.454 INFO:teuthology.orchestra.run.vm10.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T21:26:11.467 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T21:26:11.521 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T21:26:11.573 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T21:26:11.622 INFO:teuthology.orchestra.run.vm07.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T21:26:11.644 INFO:teuthology.orchestra.run.vm07.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T21:26:11.895 INFO:teuthology.orchestra.run.vm10.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T21:26:11.907 INFO:teuthology.orchestra.run.vm10.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T21:26:11.928 INFO:teuthology.orchestra.run.vm10.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T21:26:11.946 INFO:teuthology.orchestra.run.vm10.stdout:Removing zip (3.0-12build2) ...
2026-03-09T21:26:11.972 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:26:11.983 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:26:12.049 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T21:26:12.058 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-09T21:26:12.078 INFO:teuthology.orchestra.run.vm10.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-09T21:26:12.085 INFO:teuthology.orchestra.run.vm07.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T21:26:12.102 INFO:teuthology.orchestra.run.vm07.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T21:26:12.123 INFO:teuthology.orchestra.run.vm07.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T21:26:12.142 INFO:teuthology.orchestra.run.vm07.stdout:Removing zip (3.0-12build2) ...
2026-03-09T21:26:12.172 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T21:26:12.183 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T21:26:12.229 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T21:26:12.236 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-09T21:26:12.253 INFO:teuthology.orchestra.run.vm07.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-09T21:26:13.740 INFO:teuthology.orchestra.run.vm10.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-09T21:26:13.741 INFO:teuthology.orchestra.run.vm10.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-09T21:26:13.902 INFO:teuthology.orchestra.run.vm07.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-09T21:26:13.903 INFO:teuthology.orchestra.run.vm07.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-09T21:26:16.160 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:16.164 DEBUG:teuthology.parallel:result is None
2026-03-09T21:26:16.252 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T21:26:16.255 DEBUG:teuthology.parallel:result is None
2026-03-09T21:26:16.255 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm07.local
2026-03-09T21:26:16.255 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm10.local
2026-03-09T21:26:16.255 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-09T21:26:16.255 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-09T21:26:16.263 DEBUG:teuthology.orchestra.run.vm10:> sudo apt-get update
2026-03-09T21:26:16.304 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-get update
2026-03-09T21:26:16.453 INFO:teuthology.orchestra.run.vm10.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T21:26:16.495 INFO:teuthology.orchestra.run.vm07.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T21:26:16.529 INFO:teuthology.orchestra.run.vm07.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T21:26:16.535 INFO:teuthology.orchestra.run.vm10.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T21:26:16.536 INFO:teuthology.orchestra.run.vm07.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T21:26:16.543 INFO:teuthology.orchestra.run.vm10.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T21:26:16.543 INFO:teuthology.orchestra.run.vm07.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T21:26:16.551 INFO:teuthology.orchestra.run.vm10.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T21:26:17.651 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-09T21:26:17.661 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-09T21:26:17.667 DEBUG:teuthology.parallel:result is None
2026-03-09T21:26:17.674 DEBUG:teuthology.parallel:result is None
2026-03-09T21:26:17.674 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-09T21:26:17.676 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-09T21:26:17.676 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T21:26:17.678 DEBUG:teuthology.orchestra.run.vm10:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:-de.relay.mahi.b 232.208.203.34   3 u   36  128  377   21.090   -0.949   0.216
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:*server1a.sim720 193.67.79.202    2 u   33  128  377   24.991   -0.345   0.184
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:+node-1.infogral 168.239.11.197   2 u   40  128  377   23.573   -0.271   0.136
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:-netcup02.therav 189.97.54.122    2 u   34  128  377   28.680   -2.999   0.216
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:+cp.hypermediaa. 189.97.54.122    2 u   33  128  377   25.018   -0.569   0.134
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:-lb01.leardev.de 192.53.103.108   2 u   94  128  377   25.829   +0.179   0.307
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:-vps-nue1.orlean 195.145.119.188  2 u   34  128  377   28.278   -2.636   0.288
2026-03-09T21:26:17.906 INFO:teuthology.orchestra.run.vm07.stdout:-stratum2-3.NTP. 129.70.137.82    2 u   34  128  377   30.688   -1.690   0.183
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:==============================================================================
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:*node-1.infogral 168.239.11.197   2 u   28   64  377   23.556   -0.037   0.133
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:+server1a.sim720 193.67.79.202    2 u   33   64  377   25.089   -0.068   0.061
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:-lb01.leardev.de 192.53.103.108   2 u   25   64  377   25.883   +0.349   0.324
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:-static.215.156. 35.73.197.144    2 u   28   64  377   23.487   +0.338   0.036
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:-de.relay.mahi.b 232.208.203.34   3 u  100  128  377   21.125   -0.555   0.064
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:+cp.hypermediaa. 189.97.54.122    2 u   20   64  377   25.037   -0.228   0.079
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:-ctb01.martinmoe 87.63.200.138    2 u   26   64  377   31.712   +0.278   0.287
2026-03-09T21:26:18.000 INFO:teuthology.orchestra.run.vm10.stdout:-185.125.190.58  145.238.80.80    2 u   52   64  377   32.080   +0.658   0.279
2026-03-09T21:26:18.001 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T21:26:18.003 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T21:26:18.003 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T21:26:18.006 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T21:26:18.008 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T21:26:18.010 INFO:teuthology.task.internal:Duration was 1281.696551 seconds
2026-03-09T21:26:18.011 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T21:26:18.013 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T21:26:18.013 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T21:26:18.015 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T21:26:18.044 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T21:26:18.044 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-03-09T21:26:18.044 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T21:26:18.098 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm10.local
2026-03-09T21:26:18.098 DEBUG:teuthology.orchestra.run.vm10:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T21:26:18.111 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T21:26:18.112 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:26:18.142 DEBUG:teuthology.orchestra.run.vm10:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:26:18.229 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T21:26:18.230 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T21:26:18.230 DEBUG:teuthology.orchestra.run.vm10:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T21:26:18.237 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T21:26:18.237 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T21:26:18.237 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T21:26:18.237 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:26:18.237 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T21:26:18.239 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T21:26:18.239 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T21:26:18.239 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T21:26:18.239 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:26:18.240 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T21:26:18.254 INFO:teuthology.orchestra.run.vm10.stderr: 90.5% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T21:26:18.255 INFO:teuthology.orchestra.run.vm07.stderr: 92.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T21:26:18.256 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T21:26:18.259 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T21:26:18.259 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T21:26:18.309 DEBUG:teuthology.orchestra.run.vm10:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T21:26:18.318 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T21:26:18.321 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:26:18.354 DEBUG:teuthology.orchestra.run.vm10:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:26:18.361 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-03-09T21:26:18.368 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern = core
2026-03-09T21:26:18.377 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:26:18.415 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:26:18.416 DEBUG:teuthology.orchestra.run.vm10:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:26:18.420 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:26:18.420 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T21:26:18.422 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T21:26:18.423 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658/remote/vm07
2026-03-09T21:26:18.423 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T21:26:18.467 DEBUG:teuthology.misc:Transferring archived files from vm10:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/658/remote/vm10
2026-03-09T21:26:18.467 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T21:26:18.476 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T21:26:18.476 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T21:26:18.509 DEBUG:teuthology.orchestra.run.vm10:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T21:26:18.521 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T21:26:18.524 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T21:26:18.524 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T21:26:18.526 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T21:26:18.527 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T21:26:18.558 DEBUG:teuthology.orchestra.run.vm10:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T21:26:18.560 INFO:teuthology.orchestra.run.vm07.stdout:   258079      4 drwxr-xr-x   2 ubuntu   ubuntu       4096 Mar  9 21:26 /home/ubuntu/cephtest
2026-03-09T21:26:18.565 INFO:teuthology.orchestra.run.vm10.stdout:   258077      4 drwxr-xr-x   2 ubuntu   ubuntu       4096 Mar  9 21:26 /home/ubuntu/cephtest
2026-03-09T21:26:18.566 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T21:26:18.572 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity
  msgr/async-v2only start tasks/rados_python}
duration: 1281.6965510845184
flavor: default
owner: kyr
success: true

2026-03-09T21:26:18.572 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T21:26:18.597 INFO:teuthology.run:pass